====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:         lodemon-755c6d9977-9wwrg
Namespace:    xlou
Priority:     0
Node:         gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:   Sat, 12 Aug 2023 21:26:46 +0000
Labels:       app=lodemon
              app.kubernetes.io/name=lodemon
              pod-template-hash=755c6d9977
              skaffold.dev/run-id=770152e1-1da4-420f-9cf1-d617c5a4ffd3
Annotations:  <none>
Status:       Running
IP:           10.106.45.86
IPs:
  IP:  10.106.45.86
Controlled By:  ReplicaSet/lodemon-755c6d9977
Containers:
  lodemon:
    Container ID:  containerd://14fa10d60ddafc4f6da803bcb6704f51f2dd6ce322d5701faf804bd078d2cf4e
    Image:         gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:      gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Sat, 12 Aug 2023 21:26:47 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m48fj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-m48fj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
22:26:48 INFO 22:26:48 INFO --------------------- Get expected number of pods --------------------- 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas} 22:26:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:48 INFO [loop_until]: OK (rc = 0) 22:26:48 DEBUG --- stdout --- 22:26:48 DEBUG 3 22:26:48 DEBUG --- stderr --- 22:26:48 DEBUG 22:26:48 INFO 22:26:48 INFO ---------------------------- Get pod list ---------------------------- 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name} 22:26:48 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 22:26:48 INFO [loop_until]: OK (rc = 0) 22:26:48 DEBUG --- stdout --- 22:26:48 DEBUG am-55f77847b7-5xs2m am-55f77847b7-79tz5 am-55f77847b7-c4982 22:26:48 DEBUG --- stderr --- 22:26:48 DEBUG 22:26:48 INFO 22:26:48 INFO -------------- Check pod 
am-55f77847b7-5xs2m is running -------------- 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-5xs2m -o=jsonpath={.status.phase} | grep "Running" 22:26:48 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:48 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:48 INFO [loop_until]: OK (rc = 0) 22:26:48 DEBUG --- stdout --- 22:26:48 DEBUG Running 22:26:48 DEBUG --- stderr --- 22:26:48 DEBUG 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-5xs2m -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:48 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:48 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:48 INFO [loop_until]: OK (rc = 0) 22:26:48 DEBUG --- stdout --- 22:26:48 DEBUG true 22:26:48 DEBUG --- stderr --- 22:26:48 DEBUG 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-5xs2m --output jsonpath={.status.startTime} 22:26:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:48 INFO [loop_until]: OK (rc = 0) 22:26:48 DEBUG --- stdout --- 22:26:48 DEBUG 2023-08-12T21:17:22Z 22:26:48 DEBUG --- stderr --- 22:26:48 DEBUG 22:26:48 INFO 22:26:48 INFO ------- Check pod am-55f77847b7-5xs2m filesystem is accessible ------- 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-5xs2m --container openam -- ls / | grep "bin" 22:26:48 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:48 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:48 INFO [loop_until]: OK (rc = 0) 22:26:48 DEBUG --- stdout --- 22:26:48 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 22:26:48 DEBUG --- stderr --- 22:26:48 DEBUG 22:26:48 INFO 22:26:48 INFO ------------- Check pod am-55f77847b7-5xs2m restart count ------------- 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-5xs2m --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:48 INFO [loop_until]: OK (rc = 0) 22:26:48 DEBUG --- stdout --- 22:26:48 DEBUG 0 22:26:48 DEBUG --- stderr --- 22:26:48 DEBUG 22:26:48 INFO Pod am-55f77847b7-5xs2m has been restarted 0 times. 
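The [loop_until] entries above show each kubectl probe being driven by a retry wrapper with a maximum wait time, a poll interval, and a set of acceptable return codes. The real helper lives in /lodestar/scripts/lodemon_run.py and is not shown in this log; the snippet below is a minimal sketch of that style of wrapper, with the function name, signature, and error handling assumed rather than taken from the source.

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run a shell command until it exits with an expected return code,
    or raise once max_time seconds have elapsed (hypothetical reimplementation,
    not the actual lodemon helper)."""
    deadline = time.monotonic() + max_time
    while True:
        # shell=True because several probes in the log pipe through grep.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command did not succeed within {max_time}s: {cmd}")
        time.sleep(interval)

# Example: the first probe recorded in the log above.
replicas = loop_until(
    "kubectl --namespace=xlou get deployments --selector app=am "
    "--output jsonpath={.items[*].spec.replicas}",
    max_time=180, interval=5)
print(replicas.stdout.strip())
```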
22:26:48 INFO 22:26:48 INFO -------------- Check pod am-55f77847b7-79tz5 is running -------------- 22:26:48 INFO 22:26:48 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-79tz5 -o=jsonpath={.status.phase} | grep "Running" 22:26:48 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG Running 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-79tz5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG true 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-79tz5 --output jsonpath={.status.startTime} 22:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG 2023-08-12T21:17:22Z 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO ------- Check pod am-55f77847b7-79tz5 filesystem is accessible ------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-79tz5 --container openam -- ls / | grep "bin" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO ------------- Check pod am-55f77847b7-79tz5 restart count ------------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-79tz5 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG 0 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO Pod am-55f77847b7-79tz5 has been restarted 0 times. 
22:26:49 INFO 22:26:49 INFO -------------- Check pod am-55f77847b7-c4982 is running -------------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-c4982 -o=jsonpath={.status.phase} | grep "Running" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG Running 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-c4982 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG true 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-c4982 --output jsonpath={.status.startTime} 22:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG 2023-08-12T21:17:23Z 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO ------- Check pod am-55f77847b7-c4982 filesystem is accessible ------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-c4982 --container openam -- ls / | grep "bin" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO ------------- Check pod am-55f77847b7-c4982 restart count ------------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-c4982 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG 0 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO Pod am-55f77847b7-c4982 has been restarted 0 times. 
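Each pod above is put through the same five probes: phase, container readiness, start time, a filesystem check via kubectl exec ... ls /, and the restart count. A condensed sketch of that sequence follows; the helper names and the returned dictionary are assumptions, and the retry behaviour of the real loop_until wrapper is omitted for brevity.

```python
import subprocess

def kc(cmd: str) -> str:
    """Run a shell command and return its stripped stdout (raises on failure)."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout.strip()

def check_pod(namespace: str, pod: str, container: str) -> dict:
    """The five per-pod probes from the log: phase, readiness, start time,
    filesystem access via exec, and restart count (no retries here)."""
    base = f"kubectl --namespace={namespace}"
    return {
        "pod": pod,
        "phase": kc(f"{base} get pods {pod} -o=jsonpath={{.status.phase}}"),
        "ready": kc(f"{base} get pods {pod} -o=jsonpath={{.status.containerStatuses[*].ready}}"),
        "start_time": kc(f"{base} get pod {pod} --output jsonpath={{.status.startTime}}"),
        "filesystem_ok": "bin" in kc(f"{base} exec {pod} --container {container} -- ls /"),
        "restart_count": int(kc(f"{base} get pod {pod} --output jsonpath={{.status.containerStatuses[*].restartCount}}") or 0),
    }

# Example: the AM pods checked above, all running the "openam" container.
for name in ("am-55f77847b7-5xs2m", "am-55f77847b7-79tz5", "am-55f77847b7-c4982"):
    print(check_pod("xlou", name, "openam"))
```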
22:26:49 INFO 22:26:49 INFO --------------------- Get expected number of pods --------------------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 22:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG 2 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO ---------------------------- Get pod list ---------------------------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 22:26:49 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG idm-65858d8c4c-n7zrc idm-65858d8c4c-wd2fd 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO -------------- Check pod idm-65858d8c4c-n7zrc is running -------------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-n7zrc -o=jsonpath={.status.phase} | grep "Running" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG Running 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-n7zrc -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG true 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-n7zrc --output jsonpath={.status.startTime} 22:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG 2023-08-12T21:17:23Z 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO ------- Check pod idm-65858d8c4c-n7zrc filesystem is accessible ------- 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-n7zrc --container openidm -- ls / | grep "bin" 22:26:49 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:49 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:49 INFO [loop_until]: OK (rc = 0) 22:26:49 DEBUG --- stdout --- 22:26:49 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:49 DEBUG --- stderr --- 22:26:49 DEBUG 22:26:49 INFO 22:26:49 INFO ------------ Check pod idm-65858d8c4c-n7zrc restart count ------------ 22:26:49 INFO 22:26:49 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-n7zrc --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG 0 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO Pod idm-65858d8c4c-n7zrc has been restarted 0 times. 
22:26:50 INFO 22:26:50 INFO -------------- Check pod idm-65858d8c4c-wd2fd is running -------------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-wd2fd -o=jsonpath={.status.phase} | grep "Running" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG Running 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-wd2fd -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG true 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-wd2fd --output jsonpath={.status.startTime} 22:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG 2023-08-12T21:17:23Z 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ------- Check pod idm-65858d8c4c-wd2fd filesystem is accessible ------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-wd2fd --container openidm -- ls / | grep "bin" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ------------ Check pod idm-65858d8c4c-wd2fd restart count ------------ 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-wd2fd --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG 0 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO Pod idm-65858d8c4c-wd2fd has been restarted 0 times. 
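Before each group of pod checks, the log first reads the expected replica count from the Deployment or StatefulSet spec and then lists the pods matching the same selector. A small sketch of that comparison, under the same assumptions as the previous snippets:

```python
import subprocess

def kubectl(args: str) -> str:
    """Run a kubectl command and return its stripped stdout."""
    return subprocess.run(f"kubectl {args}", shell=True, check=True,
                          capture_output=True, text=True).stdout.strip()

def expected_vs_actual(namespace: str, kind: str, selector: str):
    """Read .spec.replicas from the Deployment/StatefulSet and list the pods
    matching the same label selector, as the log does for each component."""
    expected = int(kubectl(f"--namespace={namespace} get {kind} --selector {selector} "
                           "--output jsonpath={.items[*].spec.replicas}"))
    pods = kubectl(f"--namespace={namespace} get pods --selector {selector} "
                   "--output jsonpath={.items[*].metadata.name}").split()
    return expected, pods

# Example: the idm Deployment above (2 replicas expected).
expected, pods = expected_vs_actual("xlou", "deployments", "app=idm")
assert expected == len(pods), f"expected {expected} idm pods, found {len(pods)}"
print(expected, pods)
```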
22:26:50 INFO 22:26:50 INFO --------------------- Get expected number of pods --------------------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 22:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG 3 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ---------------------------- Get pod list ---------------------------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 22:26:50 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG Running 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG true 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 22:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG 2023-08-12T20:44:32Z 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG 0 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO Pod ds-idrepo-0 has been restarted 0 times. 
22:26:50 INFO 22:26:50 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG Running 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG true 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 22:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG 2023-08-12T20:55:43Z 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 22:26:50 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:50 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:50 INFO [loop_until]: OK (rc = 0) 22:26:50 DEBUG --- stdout --- 22:26:50 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:50 DEBUG --- stderr --- 22:26:50 DEBUG 22:26:50 INFO 22:26:50 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 22:26:50 INFO 22:26:50 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG 0 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO Pod ds-idrepo-1 has been restarted 0 times. 
22:26:51 INFO 22:26:51 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG Running 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG true 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 22:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG 2023-08-12T21:06:30Z 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG 0 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO Pod ds-idrepo-2 has been restarted 0 times. 
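The start-time probe returns an RFC 3339 timestamp such as 2023-08-12T21:06:30Z. The log only records the raw value; one plausible use, sketched here, is converting it into a pod age:

```python
from datetime import datetime, timezone

def pod_age_seconds(start_time: str) -> float:
    """Convert the RFC 3339 startTime returned by the probe above
    (e.g. '2023-08-12T21:06:30Z') into an age in seconds."""
    started = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - started).total_seconds()

# ds-idrepo-2 started at 2023-08-12T21:06:30Z according to the check above.
print(pod_age_seconds("2023-08-12T21:06:30Z"))
```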
22:26:51 INFO 22:26:51 INFO --------------------- Get expected number of pods --------------------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 22:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG 3 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO ---------------------------- Get pod list ---------------------------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 22:26:51 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO -------------------- Check pod ds-cts-0 is running -------------------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG Running 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG true 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 22:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG 2023-08-12T20:44:32Z 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG 0 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO Pod ds-cts-0 has been restarted 0 times. 
22:26:51 INFO 22:26:51 INFO -------------------- Check pod ds-cts-1 is running -------------------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG Running 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG true 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 22:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:51 INFO [loop_until]: OK (rc = 0) 22:26:51 DEBUG --- stdout --- 22:26:51 DEBUG 2023-08-12T20:44:58Z 22:26:51 DEBUG --- stderr --- 22:26:51 DEBUG 22:26:51 INFO 22:26:51 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 22:26:51 INFO 22:26:51 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 22:26:51 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:52 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:52 INFO [loop_until]: OK (rc = 0) 22:26:52 DEBUG --- stdout --- 22:26:52 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:52 DEBUG --- stderr --- 22:26:52 DEBUG 22:26:52 INFO 22:26:52 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 22:26:52 INFO 22:26:52 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:52 INFO [loop_until]: OK (rc = 0) 22:26:52 DEBUG --- stdout --- 22:26:52 DEBUG 0 22:26:52 DEBUG --- stderr --- 22:26:52 DEBUG 22:26:52 INFO Pod ds-cts-1 has been restarted 0 times. 
22:26:52 INFO 22:26:52 INFO -------------------- Check pod ds-cts-2 is running -------------------- 22:26:52 INFO 22:26:52 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 22:26:52 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:52 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:52 INFO [loop_until]: OK (rc = 0) 22:26:52 DEBUG --- stdout --- 22:26:52 DEBUG Running 22:26:52 DEBUG --- stderr --- 22:26:52 DEBUG 22:26:52 INFO 22:26:52 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:26:52 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:52 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:52 INFO [loop_until]: OK (rc = 0) 22:26:52 DEBUG --- stdout --- 22:26:52 DEBUG true 22:26:52 DEBUG --- stderr --- 22:26:52 DEBUG 22:26:52 INFO 22:26:52 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 22:26:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:52 INFO [loop_until]: OK (rc = 0) 22:26:52 DEBUG --- stdout --- 22:26:52 DEBUG 2023-08-12T20:45:24Z 22:26:52 DEBUG --- stderr --- 22:26:52 DEBUG 22:26:52 INFO 22:26:52 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 22:26:52 INFO 22:26:52 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 22:26:52 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:26:52 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:26:52 INFO [loop_until]: OK (rc = 0) 22:26:52 DEBUG --- stdout --- 22:26:52 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:26:52 DEBUG --- stderr --- 22:26:52 DEBUG 22:26:52 INFO 22:26:52 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 22:26:52 INFO 22:26:52 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 22:26:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:52 INFO [loop_until]: OK (rc = 0) 22:26:52 DEBUG --- stdout --- 22:26:52 DEBUG 0 22:26:52 DEBUG --- stderr --- 22:26:52 DEBUG 22:26:52 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.86:8080 Press CTRL+C to quit 22:27:23 INFO 22:27:23 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:23 INFO [loop_until]: OK (rc = 0) 22:27:23 DEBUG --- stdout --- 22:27:23 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:23 DEBUG --- stderr --- 22:27:23 DEBUG
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:25 DEBUG --- stderr --- 22:27:25 DEBUG 22:27:25 INFO 22:27:25 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:25 INFO [loop_until]: OK (rc = 0) 22:27:25 DEBUG --- stdout --- 22:27:25 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:25 DEBUG --- stderr --- 22:27:25 DEBUG 22:27:25 INFO 22:27:25 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:25 INFO [loop_until]: OK (rc = 0) 22:27:25 DEBUG --- stdout --- 22:27:25 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:25 DEBUG --- stderr --- 22:27:25 DEBUG 22:27:25 INFO 22:27:25 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:25 INFO [loop_until]: OK (rc = 0) 22:27:25 DEBUG --- stdout --- 22:27:25 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:25 DEBUG --- stderr --- 22:27:25 DEBUG 22:27:25 INFO 22:27:25 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:25 INFO [loop_until]: OK (rc = 0) 22:27:25 DEBUG --- stdout --- 22:27:25 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:25 DEBUG --- stderr --- 22:27:25 DEBUG 22:27:25 INFO 22:27:25 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:25 INFO [loop_until]: OK (rc = 0) 22:27:25 DEBUG --- stdout --- 22:27:25 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:25 DEBUG --- stderr --- 22:27:25 DEBUG 22:27:25 INFO 22:27:25 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:25 INFO [loop_until]: OK (rc = 0) 22:27:25 DEBUG --- stdout --- 22:27:25 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:25 DEBUG --- stderr --- 22:27:25 DEBUG 22:27:25 INFO 22:27:25 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:26 INFO 22:27:26 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:26 INFO 22:27:26 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:26 INFO 22:27:26 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:26 INFO 22:27:26 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:26 INFO 22:27:26 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:26 INFO 22:27:26 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:26 INFO Initializing monitoring instance threads 22:27:26 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 22:27:26 INFO Starting instance threads 22:27:26 INFO 22:27:26 INFO Thread started 22:27:26 INFO [loop_until]: kubectl --namespace=xlou top node 22:27:26 INFO 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO Thread started 22:27:26 INFO [loop_until]: kubectl --namespace=xlou top pods 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646" 22:27:26 INFO Thread started Exception in thread Thread-23: 22:27:26 INFO Thread started Traceback (most recent call last): 22:27:26 INFO Thread started Exception in thread Thread-24: File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Traceback (most recent call last): 22:27:26 INFO Thread started Exception in thread Thread-25: 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691875646" 22:27:26 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Traceback (most recent call last): 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO Thread started self.run() 22:27:26 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646" 22:27:26 INFO Thread started 22:27:26 INFO All threads has been started File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 22:27:26 INFO [loop_until]: OK (rc = 0) self.run() 22:27:26 DEBUG --- stdout --- File "/usr/local/lib/python3.9/threading.py", line 910, in run 22:27:26 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 14m 3298Mi am-55f77847b7-79tz5 14m 2499Mi am-55f77847b7-c4982 9m 2250Mi ds-cts-0 8m 374Mi ds-cts-1 8m 359Mi ds-cts-2 9m 365Mi ds-idrepo-0 18m 10315Mi ds-idrepo-1 16m 10259Mi ds-idrepo-2 27m 10265Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 11m 3392Mi idm-65858d8c4c-wd2fd 8m 1278Mi lodemon-755c6d9977-9wwrg 258m 60Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 14Mi 127.0.0.1 - - [12/Aug/2023 22:27:26] "GET /monitoring/start HTTP/1.1" 200 - 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG Exception in thread Thread-28: File "/usr/local/lib/python3.9/threading.py", line 910, in run Traceback (most recent call last): self.run() File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self.run() self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/threading.py", line 910, in run instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run instance.run() KeyError: 'functions' File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run instance.run() if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run KeyError: 'functions' if self.prom_data['functions']: if self.prom_data['functions']: KeyError: 'functions' KeyError: 'functions' 22:27:26 INFO [loop_until]: OK (rc = 0) 22:27:26 DEBUG --- stdout --- 22:27:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 105m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4343Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 3399Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 3672Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 4718Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2116Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2541Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 10985Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 10914Mi 18% 
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 10906Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1631Mi 2% 22:27:26 DEBUG --- stderr --- 22:27:26 DEBUG 22:27:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:27 WARNING Response is NONE 22:27:27 DEBUG Exception is preset. Setting retry_loop to true 22:27:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:29 WARNING Response is NONE 22:27:29 WARNING Response is NONE 22:27:29 DEBUG Exception is preset. Setting retry_loop to true 22:27:29 DEBUG Exception is preset. Setting retry_loop to true 22:27:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
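[editor's note] The [http_cmd] entries above are plain Prometheus HTTP API instant queries with the PromQL URL-encoded; the container CPU query, for example, decodes to sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s])) by (pod). A minimal Python sketch of issuing the same query without hand-encoding, assuming the in-cluster service name taken from the log and the requests library (illustrative only, not necessarily how lodemon builds its requests):

    import requests

    # Service address copied from the log; resolvable only from inside the cluster.
    PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    # Decoded form of one of the URL-encoded queries issued by the monitoring threads.
    query = "sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s])) by (pod)"

    # /api/v1/query is the standard Prometheus instant-query endpoint; 'time' mirrors
    # the epoch timestamp (1691875646) that appears in the logged URLs.
    resp = requests.get(f"{PROM}/api/v1/query",
                        params={"query": query, "time": 1691875646},
                        timeout=10)
    resp.raise_for_status()
    for item in resp.json()["data"]["result"]:
        print(item["metric"].get("pod"), item["value"][1])

requests encodes the query string itself, which is why the hand-built curl commands in the log carry the %28/%7B escapes.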
Checking if error is transient one 22:27:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:33 WARNING Response is NONE 22:27:33 WARNING Response is NONE 22:27:33 WARNING Response is NONE 22:27:33 WARNING Response is NONE 22:27:33 DEBUG Exception is preset. Setting retry_loop to true 22:27:33 DEBUG Exception is preset. Setting retry_loop to true 22:27:33 DEBUG Exception is preset. Setting retry_loop to true 22:27:33 DEBUG Exception is preset. Setting retry_loop to true 22:27:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:38 WARNING Response is NONE 22:27:38 DEBUG Exception is preset. Setting retry_loop to true 22:27:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:41 WARNING Response is NONE 22:27:41 WARNING Response is NONE 22:27:41 DEBUG Exception is preset. Setting retry_loop to true 22:27:41 DEBUG Exception is preset. Setting retry_loop to true 22:27:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
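[editor's note] The tracebacks interleaved with the kubectl top output above all fail at the same point: monitoring.py line 285 evaluates self.prom_data['functions'] and raises KeyError: 'functions' in Thread-23, 24, 25 and 28. A minimal sketch of that failure and a defensive alternative, assuming prom_data is a plain dict built from the lodemon config (the real monitoring.py source is not part of this log):

    # Hypothetical stand-in for the per-instance config that monitoring.py reads;
    # the actual structure is not visible in this log.
    prom_data = {
        "query": "sum(rate(am_authentication_count{namespace='xlou'}[60s])) by (pod)",
        # no 'functions' key, which is what the failing instances appear to hit
    }

    # What line 285 effectively does, and why it raises for these instances:
    try:
        if prom_data['functions']:   # KeyError: 'functions' when the key is absent
            pass
    except KeyError as exc:
        print(f"KeyError: {exc}")

    # Defensive variant that treats a missing key as "no post-processing functions":
    if prom_data.get('functions'):
        print("apply post-processing functions")
    else:
        print("no 'functions' defined for this instance; skipping")

Only some monitoring threads die this way; the others carry on and produce the retry warnings that follow.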
22:27:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:42 WARNING Response is NONE 22:27:42 WARNING Response is NONE 22:27:42 DEBUG Exception is preset. Setting retry_loop to true 22:27:42 DEBUG Exception is preset. Setting retry_loop to true 22:27:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:45 WARNING Response is NONE 22:27:45 DEBUG Exception is preset. Setting retry_loop to true 22:27:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:47 WARNING Response is NONE 22:27:47 WARNING Response is NONE 22:27:47 DEBUG Exception is preset. Setting retry_loop to true 22:27:47 DEBUG Exception is preset. Setting retry_loop to true 22:27:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:27:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:50 WARNING Response is NONE 22:27:50 DEBUG Exception is preset. Setting retry_loop to true 22:27:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:52 WARNING Response is NONE 22:27:52 DEBUG Exception is preset. Setting retry_loop to true 22:27:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:53 WARNING Response is NONE 22:27:53 DEBUG Exception is preset. Setting retry_loop to true 22:27:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:54 WARNING Response is NONE 22:27:54 DEBUG Exception is preset. Setting retry_loop to true 22:27:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:56 WARNING Response is NONE 22:27:56 DEBUG Exception is preset. Setting retry_loop to true 22:27:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:27:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:58 WARNING Response is NONE 22:27:58 DEBUG Exception is preset. Setting retry_loop to true 22:27:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:27:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:27:59 WARNING Response is NONE 22:27:59 DEBUG Exception is preset. Setting retry_loop to true 22:27:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:01 WARNING Response is NONE 22:28:01 DEBUG Exception is preset. Setting retry_loop to true 22:28:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:03 WARNING Response is NONE 22:28:03 DEBUG Exception is preset. Setting retry_loop to true 22:28:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:05 WARNING Response is NONE 22:28:05 DEBUG Exception is preset. Setting retry_loop to true 22:28:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:28:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:07 WARNING Response is NONE 22:28:07 DEBUG Exception is preset. Setting retry_loop to true 22:28:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:09 WARNING Response is NONE 22:28:09 DEBUG Exception is preset. Setting retry_loop to true 22:28:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:10 WARNING Response is NONE 22:28:10 DEBUG Exception is preset. Setting retry_loop to true 22:28:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:12 WARNING Response is NONE 22:28:12 DEBUG Exception is preset. Setting retry_loop to true 22:28:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:14 WARNING Response is NONE 22:28:14 DEBUG Exception is preset. Setting retry_loop to true 22:28:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:28:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:15 WARNING Response is NONE 22:28:15 DEBUG Exception is preset. Setting retry_loop to true 22:28:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:18 WARNING Response is NONE 22:28:18 DEBUG Exception is preset. Setting retry_loop to true 22:28:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:20 WARNING Response is NONE 22:28:20 DEBUG Exception is preset. Setting retry_loop to true 22:28:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:21 WARNING Response is NONE 22:28:21 DEBUG Exception is preset. Setting retry_loop to true 22:28:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:23 WARNING Response is NONE 22:28:23 DEBUG Exception is preset. Setting retry_loop to true 22:28:23 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
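[Editor's note] The retry behaviour traced above — sleep 10 seconds after each known transient connection error, then fall through to response checking once the budget of 5 retries is spent — roughly corresponds to the minimal sketch below. It is only an illustration of the pattern: query_prometheus and PROM_URL are assumed names, not taken from the lodestar codebase, and the real logic lives in HttpCmd.request_cmd according to the tracebacks that follow.

    import time
    import requests

    # Assumed endpoint, taken from the failing URLs in the log above.
    PROM_URL = ("http://prometheus-operator-kube-p-prometheus"
                ".monitoring.svc.cluster.local:9090/api/v1/query")

    def query_prometheus(promql, ts, retries=5, sleep_secs=10):
        """Illustrative retry loop: sleep between attempts on transient
        connection errors, then proceed to check the response anyway."""
        response = None
        for attempt in range(1, retries + 1):
            try:
                response = requests.get(PROM_URL,
                                        params={"query": promql, "time": ts},
                                        timeout=30)
                break
            except requests.exceptions.ConnectionError:
                # Known transient error (e.g. [Errno 111] Connection refused).
                if attempt < retries:
                    time.sleep(sleep_secs)
        # After the last attempt the caller inspects `response`, which may
        # still be None if Prometheus never became reachable.
        return response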
Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:28:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:25 WARNING Response is NONE 22:28:25 DEBUG Exception is preset. Setting retry_loop to true 22:28:25 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:28:27 INFO 22:28:27 INFO [loop_until]: kubectl --namespace=xlou top pods 22:28:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:28:27 INFO 22:28:27 INFO [loop_until]: kubectl --namespace=xlou top node 22:28:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:28:27 WARNING Response is NONE 22:28:27 DEBUG Exception is preset. Setting retry_loop to true 22:28:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:27 INFO [loop_until]: OK (rc = 0) 22:28:27 DEBUG --- stdout --- 22:28:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 3299Mi am-55f77847b7-79tz5 17m 2501Mi am-55f77847b7-c4982 7m 2250Mi ds-cts-0 6m 378Mi ds-cts-1 7m 362Mi ds-cts-2 74m 367Mi ds-idrepo-0 480m 10320Mi ds-idrepo-1 31m 10265Mi ds-idrepo-2 42m 10277Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 10m 3393Mi idm-65858d8c4c-wd2fd 17m 1315Mi lodemon-755c6d9977-9wwrg 3m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 137m 140Mi 22:28:27 DEBUG --- stderr --- 22:28:27 DEBUG 22:28:27 INFO [loop_until]: OK (rc = 0) 22:28:27 DEBUG --- stdout --- 22:28:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4340Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3395Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 3670Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4719Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2116Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 87m 0% 2595Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 10992Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10928Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10909Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 232m 1% 1737Mi 2% 22:28:27 DEBUG --- stderr --- 22:28:27 DEBUG 22:28:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection 
refused')). Checking if error is transient one 22:28:29 WARNING Response is NONE 22:28:29 DEBUG Exception is preset. Setting retry_loop to true 22:28:29 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:28:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:31 WARNING Response is NONE 22:28:31 DEBUG Exception is preset. Setting retry_loop to true 22:28:31 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:28:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:32 WARNING Response is NONE 22:28:32 DEBUG Exception is preset. Setting retry_loop to true 22:28:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:38 WARNING Response is NONE 22:28:38 DEBUG Exception is preset. Setting retry_loop to true 22:28:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:41 WARNING Response is NONE 22:28:41 DEBUG Exception is preset. Setting retry_loop to true 22:28:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:43 WARNING Response is NONE 22:28:43 DEBUG Exception is preset. Setting retry_loop to true 22:28:43 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
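[Editor's note] The secondary TypeError in the tracebacks above is independent of the Prometheus outage: monitoring.py line 315 calls the logger object itself, self.logger(f'Query: {query} failed with: {e}'), and LodestarLogger, like the standard logging.Logger, is evidently not callable. Assuming LodestarLogger exposes the usual level methods, the likely fix is to call one of them instead, e.g. self.logger.error(...). The standard library reproduces the same failure:

    import logging

    logging.basicConfig(level=logging.ERROR)
    logger = logging.getLogger("lodemon")

    try:
        # Mirrors monitoring.py line 315: calling the logger object directly.
        logger("Query failed")
    except TypeError as exc:
        print(exc)  # 'Logger' object is not callable
        # Assumed fix: use a level method on the logger instead.
        logger.error("Query failed")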
Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:28:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:47 WARNING Response is NONE 22:28:47 DEBUG Exception is preset. Setting retry_loop to true 22:28:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:49 WARNING Response is NONE 22:28:49 DEBUG Exception is preset. Setting retry_loop to true 22:28:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:28:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:52 WARNING Response is NONE 22:28:52 DEBUG Exception is preset. Setting retry_loop to true 22:28:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:28:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:28:58 WARNING Response is NONE 22:28:58 DEBUG Exception is preset. Setting retry_loop to true 22:28:58 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-3: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:29:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:00 WARNING Response is NONE 22:29:00 DEBUG Exception is preset. Setting retry_loop to true 22:29:00 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:29:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:01 WARNING Response is NONE 22:29:01 DEBUG Exception is preset. Setting retry_loop to true 22:29:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:03 WARNING Response is NONE 22:29:03 DEBUG Exception is preset. Setting retry_loop to true 22:29:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:12 WARNING Response is NONE 22:29:12 DEBUG Exception is preset. Setting retry_loop to true 22:29:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:14 WARNING Response is NONE 22:29:14 DEBUG Exception is preset. Setting retry_loop to true 22:29:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-21:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:29:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:29:23 WARNING Response is NONE
22:29:23 DEBUG Exception is preset. Setting retry_loop to true
22:29:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:29:27 INFO 22:29:27 INFO 22:29:27 INFO [loop_until]: kubectl --namespace=xlou top pods 22:29:27 INFO [loop_until]: kubectl --namespace=xlou top node 22:29:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:29:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:29:27 INFO [loop_until]: OK (rc = 0) 22:29:27 DEBUG --- stdout --- 22:29:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 8m 3299Mi am-55f77847b7-79tz5 12m 2502Mi am-55f77847b7-c4982 6m 2251Mi ds-cts-0 9m 379Mi ds-cts-1 7m 362Mi ds-cts-2 10m 368Mi ds-idrepo-0 16m 10320Mi ds-idrepo-1 14m 10266Mi ds-idrepo-2 21m 10281Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 10m 3393Mi idm-65858d8c4c-wd2fd 8m 1314Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 48Mi 22:29:27 DEBUG --- stderr --- 22:29:27 DEBUG 22:29:27 INFO [loop_until]: OK (rc = 0) 22:29:27 DEBUG --- stdout --- 22:29:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 4342Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3397Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3672Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 4720Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2584Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 10992Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 10928Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 10910Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1629Mi 2% 22:29:27 DEBUG --- stderr --- 22:29:27 DEBUG 22:29:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:34 WARNING Response is NONE 22:29:34 DEBUG Exception is preset. Setting retry_loop to true 22:29:34 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 WARNING Response is NONE 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 DEBUG Exception is preset. Setting retry_loop to true 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:49 WARNING Response is NONE 22:29:49 DEBUG Exception is preset. Setting retry_loop to true 22:29:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:51 WARNING Response is NONE 22:29:51 WARNING Response is NONE 22:29:51 DEBUG Exception is preset. Setting retry_loop to true 22:29:51 DEBUG Exception is preset. Setting retry_loop to true 22:29:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:29:55 WARNING Response is NONE 22:29:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:29:55 WARNING Response is NONE 22:29:55 DEBUG Exception is preset. Setting retry_loop to true 22:29:55 WARNING Response is NONE 22:29:55 WARNING Response is NONE 22:29:55 DEBUG Exception is preset. Setting retry_loop to true 22:29:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:55 DEBUG Exception is preset. Setting retry_loop to true 22:29:55 DEBUG Exception is preset. Setting retry_loop to true 22:29:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:29:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:00 WARNING Response is NONE 22:30:00 DEBUG Exception is preset. Setting retry_loop to true 22:30:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:30:02 WARNING Response is NONE 22:30:02 WARNING Response is NONE 22:30:02 DEBUG Exception is preset. Setting retry_loop to true 22:30:02 DEBUG Exception is preset. Setting retry_loop to true 22:30:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:03 WARNING Response is NONE 22:30:03 WARNING Response is NONE 22:30:03 DEBUG Exception is preset. Setting retry_loop to true 22:30:03 DEBUG Exception is preset. Setting retry_loop to true 22:30:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:06 WARNING Response is NONE 22:30:06 DEBUG Exception is preset. Setting retry_loop to true 22:30:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:30:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:08 WARNING Response is NONE 22:30:08 WARNING Response is NONE 22:30:08 WARNING Response is NONE 22:30:08 DEBUG Exception is preset. Setting retry_loop to true 22:30:08 DEBUG Exception is preset. Setting retry_loop to true 22:30:08 DEBUG Exception is preset. Setting retry_loop to true 22:30:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:11 WARNING Response is NONE 22:30:11 DEBUG Exception is preset. Setting retry_loop to true 22:30:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:13 WARNING Response is NONE 22:30:13 WARNING Response is NONE 22:30:13 DEBUG Exception is preset. Setting retry_loop to true 22:30:13 DEBUG Exception is preset. Setting retry_loop to true 22:30:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
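The query parameters in these warnings are URL-encoded PromQL, which is hard to read in the raw log. The short, standard-library-only Python sketch below decodes one of them; the encoded string is copied verbatim from the node_disk_read_bytes_total warnings above, and the variable names are illustrative only:

    from urllib.parse import unquote

    # URL-encoded PromQL copied from the node_disk_read_bytes_total warnings above
    encoded = ("sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2C"
               "device%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29")

    # Prints:
    # sum(rate(node_disk_read_bytes_total{job='node-exporter',device=~'nvme.+|rbd.+|sd.+|vd.+|xvd.+|dasd.+'}))by(node)
    print(unquote(encoded))

The same decoding applies to the other queries in these warnings (am_session_count, ds_backend_db_cache_misses_internal_nodes, node_network_transmit_bytes_total, and so on); only the metric name and label selectors differ.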
22:30:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:15 WARNING Response is NONE 22:30:15 DEBUG Exception is preset. Setting retry_loop to true 22:30:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:17 WARNING Response is NONE 22:30:17 DEBUG Exception is preset. Setting retry_loop to true 22:30:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:17 WARNING Response is NONE 22:30:17 DEBUG Exception is preset. Setting retry_loop to true 22:30:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:19 WARNING Response is NONE 22:30:19 DEBUG Exception is preset. Setting retry_loop to true 22:30:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:21 WARNING Response is NONE 22:30:21 DEBUG Exception is preset. Setting retry_loop to true 22:30:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:30:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:22 WARNING Response is NONE 22:30:22 DEBUG Exception is preset. Setting retry_loop to true 22:30:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:24 WARNING Response is NONE 22:30:24 DEBUG Exception is preset. Setting retry_loop to true 22:30:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:25 WARNING Response is NONE 22:30:25 DEBUG Exception is preset. Setting retry_loop to true 22:30:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:26 WARNING Response is NONE 22:30:26 DEBUG Exception is preset. Setting retry_loop to true 22:30:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
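The warnings above describe a fixed recovery pattern: a connection-refused error against the Prometheus service is treated as a known transient error, the monitoring thread sleeps 10 seconds and retries, and after five attempts it proceeds to check the (still empty) response anyway. A rough sketch of that behaviour is shown below; the query_prometheus helper and the use of requests are assumptions for illustration, not the actual Lodestar HttpCmd implementation:

    import time
    import requests

    PROMETHEUS = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    def query_prometheus(promql: str, ts: int, max_retries: int = 5, sleep_s: int = 10):
        """Hypothetical sketch of the retry behaviour the log describes."""
        response = None
        for attempt in range(1, max_retries + 1):
            try:
                response = requests.get(f"{PROMETHEUS}/api/v1/query",
                                        params={"query": promql, "time": ts},
                                        timeout=10)
                break
            except requests.exceptions.ConnectionError:
                # Connection refused is treated as a known, transient error.
                if attempt == max_retries:
                    # Matches: "Hit retry pattern for a 5 time. Proceeding to check response anyway."
                    break
                # Matches: "We received known exception. Trying to recover, sleeping for 10 secs before retry..."
                time.sleep(sleep_s)
        # May still be None, which is what the "Response is NONE" warnings report.
        return response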
22:30:27 INFO
22:30:27 INFO [loop_until]: kubectl --namespace=xlou top node
22:30:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:30:27 INFO
22:30:27 INFO [loop_until]: kubectl --namespace=xlou top pods
22:30:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:30:27 INFO [loop_until]: OK (rc = 0)
22:30:27 DEBUG --- stdout ---
22:30:27 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m          0%     1370Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m          0%     4343Mi          7%
gke-xlou-cdm-default-pool-f05840a3-976h   68m          0%     3393Mi          5%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m          0%     3676Mi          6%
gke-xlou-cdm-default-pool-f05840a3-bf2g   74m          0%     4715Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   126m         0%     2119Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   70m          0%     2583Mi          4%
gke-xlou-cdm-ds-32e4dcb1-1l6p             58m          0%     1120Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             56m          0%     1097Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             68m          0%     10993Mi         18%
gke-xlou-cdm-ds-32e4dcb1-b374             74m          0%     10933Mi         18%
gke-xlou-cdm-ds-32e4dcb1-n920             63m          0%     1102Mi          1%
gke-xlou-cdm-ds-32e4dcb1-x4wx             68m          0%     10906Mi         18%
gke-xlou-cdm-frontend-a8771548-k40m       378m         2%     1633Mi          2%
22:30:27 DEBUG --- stderr ---
22:30:27 DEBUG
22:30:27 INFO [loop_until]: OK (rc = 0)
22:30:27 DEBUG --- stdout ---
22:30:27 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-ctqf8      1m           4Mi
am-55f77847b7-5xs2m            15m          3299Mi
am-55f77847b7-79tz5            11m          2502Mi
am-55f77847b7-c4982            8m           2251Mi
ds-cts-0                       12m          378Mi
ds-cts-1                       10m          362Mi
ds-cts-2                       7m           367Mi
ds-idrepo-0                    23m          10322Mi
ds-idrepo-1                    28m          10263Mi
ds-idrepo-2                    31m          10285Mi
end-user-ui-6845bc78c7-pjfkl   1m           3Mi
idm-65858d8c4c-n7zrc           8m           3393Mi
idm-65858d8c4c-wd2fd           6m           1314Mi
lodemon-755c6d9977-9wwrg       3m           66Mi
login-ui-74d6fb46c-njxbz       1m           2Mi
overseer-0-c77c496cb-dtn6s     323m         98Mi
22:30:27 DEBUG --- stderr ---
22:30:27 DEBUG
22:30:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
22:30:28 WARNING Response is NONE
22:30:28 DEBUG Exception is preset. Setting retry_loop to true
22:30:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:30:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
22:30:30 WARNING Response is NONE
22:30:30 DEBUG Exception is preset. Setting retry_loop to true
22:30:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:30:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
22:30:33 WARNING Response is NONE
22:30:33 DEBUG Exception is preset. Setting retry_loop to true
22:30:33 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-13:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:30:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
22:30:35 WARNING Response is NONE
22:30:35 DEBUG Exception is preset. Setting retry_loop to true
22:30:35 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:30:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
22:30:37 WARNING Response is NONE
22:30:37 DEBUG Exception is preset. Setting retry_loop to true
22:30:37 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-14:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:30:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
22:30:39 WARNING Response is NONE
22:30:39 DEBUG Exception is preset. Setting retry_loop to true
22:30:39 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:30:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:41 WARNING Response is NONE 22:30:41 DEBUG Exception is preset. Setting retry_loop to true 22:30:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:30:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:30:42 WARNING Response is NONE 22:30:42 DEBUG Exception is preset. Setting retry_loop to true 22:30:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:46 WARNING Response is NONE 22:30:46 DEBUG Exception is preset. Setting retry_loop to true 22:30:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:30:52 WARNING Response is NONE 22:30:52 WARNING Response is NONE 22:30:52 WARNING Response is NONE 22:30:52 WARNING Response is NONE 22:30:52 WARNING Response is NONE 22:30:52 DEBUG Exception is preset. Setting retry_loop to true 22:30:52 DEBUG Exception is preset. Setting retry_loop to true 22:30:52 DEBUG Exception is preset. Setting retry_loop to true 22:30:52 DEBUG Exception is preset. Setting retry_loop to true 22:30:52 DEBUG Exception is preset. Setting retry_loop to true 22:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:30:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:54 WARNING Response is NONE 22:30:54 DEBUG Exception is preset. Setting retry_loop to true 22:30:54 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-12: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:30:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:56 WARNING Response is NONE 22:30:56 DEBUG Exception is preset. Setting retry_loop to true 22:30:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
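Each thread that exhausts its five retries then dies with the TypeError shown in the tracebacks: the error handler at monitoring.py line 315 calls the LodestarLogger instance directly, self.logger(f'Query: {query} failed with: {e}'), but that object is not callable, so the logging attempt itself raises and threading's _bootstrap_inner terminates the monitoring thread. Below is a minimal sketch of the failure mode and the likely one-line fix; the Monitor class and the use of Python's logging module are illustrative, and the exact LodestarLogger method name is an assumption:

    import logging

    class Monitor:
        """Illustrative stand-in for the monitoring class, not the actual Lodestar code."""

        def __init__(self) -> None:
            # Stand-in for LodestarLogger: a logger object, not a callable.
            self.logger = logging.getLogger("lodemon")

        def run(self, query: str) -> None:
            try:
                raise RuntimeError("Failed to obtain response from server...")
            except Exception as e:
                # What the tracebacks show (calling the logger object itself):
                #     self.logger(f'Query: {query} failed with: {e}')
                # raises TypeError: 'LodestarLogger' object is not callable
                # and takes the whole monitoring thread down with it.
                #
                # Likely intent, assuming the logger exposes the usual level methods:
                self.logger.error(f"Query: {query} failed with: {e}")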
22:30:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:58 WARNING Response is NONE 22:30:58 DEBUG Exception is preset. Setting retry_loop to true 22:30:58 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:30:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:30:58 WARNING Response is NONE 22:30:58 DEBUG Exception is preset. Setting retry_loop to true 22:30:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:03 WARNING Response is NONE 22:31:03 DEBUG Exception is preset. Setting retry_loop to true 22:31:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:31:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:05 WARNING Response is NONE 22:31:05 WARNING Response is NONE 22:31:05 DEBUG Exception is preset. Setting retry_loop to true 22:31:05 DEBUG Exception is preset. Setting retry_loop to true 22:31:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:07 WARNING Response is NONE 22:31:07 DEBUG Exception is preset. Setting retry_loop to true 22:31:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:09 WARNING Response is NONE 22:31:09 DEBUG Exception is preset. Setting retry_loop to true 22:31:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:11 WARNING Response is NONE 22:31:11 DEBUG Exception is preset. Setting retry_loop to true 22:31:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:31:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:14 WARNING Response is NONE 22:31:14 DEBUG Exception is preset. Setting retry_loop to true 22:31:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:16 WARNING Response is NONE 22:31:16 WARNING Response is NONE 22:31:16 DEBUG Exception is preset. Setting retry_loop to true 22:31:16 DEBUG Exception is preset. Setting retry_loop to true 22:31:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:18 WARNING Response is NONE 22:31:18 DEBUG Exception is preset. Setting retry_loop to true 22:31:18 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-17: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:31:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:20 WARNING Response is NONE 22:31:20 DEBUG Exception is preset. Setting retry_loop to true 22:31:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:22 WARNING Response is NONE 22:31:22 DEBUG Exception is preset. Setting retry_loop to true 22:31:22 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:31:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
22:31:25 WARNING Response is NONE
22:31:25 DEBUG Exception is preset. Setting retry_loop to true
22:31:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:31:27 INFO
22:31:27 INFO [loop_until]: kubectl --namespace=xlou top node
22:31:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:31:27 INFO
22:31:27 INFO [loop_until]: kubectl --namespace=xlou top pods
22:31:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:31:27 INFO [loop_until]: OK (rc = 0)
22:31:27 DEBUG --- stdout ---
22:31:27 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-ctqf8      1m           4Mi
am-55f77847b7-5xs2m            10m          3302Mi
am-55f77847b7-79tz5            10m          2503Mi
am-55f77847b7-c4982            13m          2255Mi
ds-cts-0                       9m           379Mi
ds-cts-1                       6m           363Mi
ds-cts-2                       8m           368Mi
ds-idrepo-0                    27m          10322Mi
ds-idrepo-1                    25m          10263Mi
ds-idrepo-2                    16m          10286Mi
end-user-ui-6845bc78c7-pjfkl   1m           3Mi
idm-65858d8c4c-n7zrc           8m           3393Mi
idm-65858d8c4c-wd2fd           7m           1314Mi
lodemon-755c6d9977-9wwrg       3m           65Mi
login-ui-74d6fb46c-njxbz       1m           2Mi
overseer-0-c77c496cb-dtn6s     1m           98Mi
22:31:27 DEBUG --- stderr ---
22:31:27 DEBUG
22:31:27 INFO [loop_until]: OK (rc = 0)
22:31:27 DEBUG --- stdout ---
22:31:27 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   78m          0%     1372Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m          0%     4346Mi          7%
gke-xlou-cdm-default-pool-f05840a3-976h   71m          0%     3401Mi          5%
gke-xlou-cdm-default-pool-f05840a3-9p4b   65m          0%     3676Mi          6%
gke-xlou-cdm-default-pool-f05840a3-bf2g   76m          0%     4721Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   124m         0%     2124Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   68m          0%     2587Mi          4%
gke-xlou-cdm-ds-32e4dcb1-1l6p             53m          0%     1122Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             57m          0%     1100Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             73m          0%     10996Mi         18%
gke-xlou-cdm-ds-32e4dcb1-b374             69m          0%     10933Mi         18%
gke-xlou-cdm-ds-32e4dcb1-n920             58m          0%     1100Mi          1%
gke-xlou-cdm-ds-32e4dcb1-x4wx             76m          0%     10908Mi         18%
gke-xlou-cdm-frontend-a8771548-k40m       67m          0%     1632Mi          2%
22:31:27 DEBUG --- stderr ---
22:31:27 DEBUG
22:31:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one 22:31:27 WARNING Response is NONE 22:31:27 DEBUG Exception is preset. Setting retry_loop to true 22:31:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:29 WARNING Response is NONE 22:31:29 DEBUG Exception is preset. Setting retry_loop to true 22:31:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:31 WARNING Response is NONE 22:31:31 DEBUG Exception is preset. Setting retry_loop to true 22:31:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:34 WARNING Response is NONE 22:31:34 DEBUG Exception is preset. Setting retry_loop to true 22:31:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:36 WARNING Response is NONE 22:31:36 DEBUG Exception is preset. Setting retry_loop to true 22:31:36 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-19: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:31:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:38 WARNING Response is NONE 22:31:38 DEBUG Exception is preset. Setting retry_loop to true 22:31:38 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:31:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:41 WARNING Response is NONE 22:31:41 DEBUG Exception is preset. 
Setting retry_loop to true 22:31:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:31:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:42 WARNING Response is NONE 22:31:42 DEBUG Exception is preset. Setting retry_loop to true 22:31:42 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:31:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:45 WARNING Response is NONE 22:31:45 DEBUG Exception is preset. Setting retry_loop to true 22:31:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:31:56 WARNING Response is NONE 22:31:56 DEBUG Exception is preset. Setting retry_loop to true 22:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:32:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691875646 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:32:07 WARNING Response is NONE 22:32:07 DEBUG Exception is preset. Setting retry_loop to true 22:32:07 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-27: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:32:27 INFO 22:32:27 INFO [loop_until]: kubectl --namespace=xlou top pods 22:32:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:32:27 INFO 22:32:27 INFO [loop_until]: kubectl --namespace=xlou top node 22:32:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:32:27 INFO [loop_until]: OK (rc = 0) 22:32:27 DEBUG --- stdout --- 22:32:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 3323Mi am-55f77847b7-79tz5 79m 2534Mi am-55f77847b7-c4982 142m 2347Mi ds-cts-0 71m 380Mi ds-cts-1 76m 364Mi ds-cts-2 88m 368Mi ds-idrepo-0 1426m 10765Mi ds-idrepo-1 220m 10265Mi ds-idrepo-2 205m 10289Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 33m 3409Mi idm-65858d8c4c-wd2fd 62m 1353Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 3m 98Mi 22:32:27 DEBUG --- stderr --- 22:32:27 DEBUG 22:32:27 INFO [loop_until]: OK (rc = 0) 22:32:27 DEBUG --- stdout --- 22:32:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 4370Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 210m 1% 3490Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 193m 1% 3804Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 104m 0% 4734Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 127m 0% 2622Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 128m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 144m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1474m 9% 11691Mi 19% gke-xlou-cdm-ds-32e4dcb1-b374 296m 1% 10934Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 201m 1% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 296m 1% 10911Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 497m 3% 1876Mi 3% 22:32:27 DEBUG --- stderr --- 22:32:27 DEBUG 22:33:27 INFO 22:33:27 INFO [loop_until]: kubectl --namespace=xlou top node 22:33:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:33:27 INFO 22:33:27 INFO [loop_until]: kubectl --namespace=xlou top pods 22:33:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:33:27 INFO [loop_until]: OK (rc = 0) 22:33:27 DEBUG --- stdout --- 22:33:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 11m 3324Mi am-55f77847b7-79tz5 16m 2637Mi am-55f77847b7-c4982 19m 2347Mi ds-cts-0 103m 377Mi ds-cts-1 87m 365Mi ds-cts-2 84m 370Mi ds-idrepo-0 2839m 13360Mi ds-idrepo-1 16m 10270Mi ds-idrepo-2 20m 10290Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 12m 3409Mi idm-65858d8c4c-wd2fd 8m 1353Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1123m 373Mi 22:33:27 DEBUG --- stderr --- 22:33:27 DEBUG 22:33:27 INFO [loop_until]: OK (rc = 0) 22:33:27 DEBUG --- stdout --- 22:33:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1371Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 4366Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 3491Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3809Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4736Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2619Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 277m 1% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2968m 18% 13948Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 10936Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 142m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 10912Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1185m 7% 1903Mi 3% 22:33:27 DEBUG --- stderr --- 22:33:27 DEBUG 22:34:27 INFO 22:34:27 INFO [loop_until]: kubectl --namespace=xlou top node 22:34:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:34:27 INFO 22:34:27 INFO [loop_until]: kubectl --namespace=xlou top pods 22:34:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:34:27 INFO [loop_until]: OK (rc = 0) 22:34:27 DEBUG --- stdout --- 22:34:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 76m 0% 4368Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 3504Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 3816Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4735Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2622Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2943m 18% 13942Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 76m 0% 10937Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 10917Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1250m 7% 1904Mi 3% 22:34:27 DEBUG --- stderr --- 22:34:27 DEBUG 22:34:27 INFO [loop_until]: OK (rc = 0) 22:34:27 DEBUG --- stdout --- 22:34:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 19m 3324Mi am-55f77847b7-79tz5 13m 2647Mi am-55f77847b7-c4982 24m 2359Mi ds-cts-0 9m 377Mi ds-cts-1 7m 365Mi ds-cts-2 10m 370Mi ds-idrepo-0 2841m 13405Mi ds-idrepo-1 19m 10274Mi ds-idrepo-2 21m 10292Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 12m 3410Mi idm-65858d8c4c-wd2fd 8m 1353Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1163m 373Mi 22:34:27 DEBUG --- stderr --- 22:34:27 DEBUG 22:35:27 INFO 22:35:27 INFO [loop_until]: kubectl --namespace=xlou top node 22:35:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:35:27 INFO 22:35:27 INFO [loop_until]: kubectl --namespace=xlou top pods 22:35:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:35:27 INFO [loop_until]: OK (rc = 0) 22:35:27 DEBUG --- stdout --- 22:35:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4367Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 3524Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 3828Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 81m 0% 4735Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2618Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2825m 17% 14048Mi 23% 
gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 10944Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 10916Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1314m 8% 1903Mi 3% 22:35:27 DEBUG --- stderr --- 22:35:27 DEBUG 22:35:27 INFO [loop_until]: OK (rc = 0) 22:35:27 DEBUG --- stdout --- 22:35:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 11m 3324Mi am-55f77847b7-79tz5 12m 2656Mi am-55f77847b7-c4982 22m 2382Mi ds-cts-0 6m 379Mi ds-cts-1 6m 365Mi ds-cts-2 7m 370Mi ds-idrepo-0 2876m 13465Mi ds-idrepo-1 18m 10275Mi ds-idrepo-2 20m 10294Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 15m 3415Mi idm-65858d8c4c-wd2fd 9m 1350Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1240m 375Mi 22:35:27 DEBUG --- stderr --- 22:35:27 DEBUG 22:36:27 INFO 22:36:27 INFO [loop_until]: kubectl --namespace=xlou top node 22:36:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:36:27 INFO 22:36:27 INFO [loop_until]: kubectl --namespace=xlou top pods 22:36:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:36:27 INFO [loop_until]: OK (rc = 0) 22:36:27 DEBUG --- stdout --- 22:36:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 11m 3327Mi am-55f77847b7-79tz5 11m 2666Mi am-55f77847b7-c4982 11m 2382Mi ds-cts-0 6m 379Mi ds-cts-1 6m 365Mi ds-cts-2 7m 371Mi ds-idrepo-0 2921m 13547Mi ds-idrepo-1 24m 10277Mi ds-idrepo-2 20m 10298Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 9m 3408Mi idm-65858d8c4c-wd2fd 7m 1350Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1277m 375Mi 22:36:27 DEBUG --- stderr --- 22:36:27 DEBUG 22:36:27 INFO [loop_until]: OK (rc = 0) 22:36:27 DEBUG --- stdout --- 22:36:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4369Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3535Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3838Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4731Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2127Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2619Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2995m 18% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 71m 0% 10945Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 10924Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1328m 8% 1904Mi 3% 22:36:27 DEBUG --- stderr --- 22:36:27 DEBUG 22:37:28 INFO 22:37:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:37:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:37:28 INFO 22:37:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:37:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:37:28 INFO [loop_until]: OK (rc = 0) 22:37:28 DEBUG --- stdout --- 22:37:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 21m 3328Mi am-55f77847b7-79tz5 10m 2675Mi am-55f77847b7-c4982 23m 2395Mi ds-cts-0 6m 379Mi ds-cts-1 6m 365Mi ds-cts-2 6m 370Mi ds-idrepo-0 2990m 13546Mi ds-idrepo-1 14m 10278Mi ds-idrepo-2 18m 10300Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 13m 3410Mi idm-65858d8c4c-wd2fd 14m 1347Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1348m 
375Mi 22:37:28 DEBUG --- stderr --- 22:37:28 DEBUG 22:37:28 INFO [loop_until]: OK (rc = 0) 22:37:28 DEBUG --- stdout --- 22:37:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 4372Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 3536Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3846Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 4736Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 2616Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2146m 13% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 71m 0% 10949Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 10924Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1316m 8% 1631Mi 2% 22:37:28 DEBUG --- stderr --- 22:37:28 DEBUG 22:38:28 INFO 22:38:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:38:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:38:28 INFO 22:38:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:38:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:38:28 INFO [loop_until]: OK (rc = 0) 22:38:28 DEBUG --- stdout --- 22:38:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 3328Mi am-55f77847b7-79tz5 16m 2685Mi am-55f77847b7-c4982 19m 2397Mi ds-cts-0 8m 379Mi ds-cts-1 6m 365Mi ds-cts-2 7m 371Mi ds-idrepo-0 12m 13546Mi ds-idrepo-1 174m 10279Mi ds-idrepo-2 20m 10301Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 3410Mi idm-65858d8c4c-wd2fd 12m 1354Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 2m 99Mi 22:38:28 DEBUG --- stderr --- 22:38:28 DEBUG 22:38:28 INFO [loop_until]: OK (rc = 0) 22:38:28 DEBUG --- stdout --- 22:38:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 4383Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 3555Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 3860Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 83m 0% 4746Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2131Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2620Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14122Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 10948Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 264m 1% 10925Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 75m 0% 1632Mi 2% 22:38:28 DEBUG --- stderr --- 22:38:28 DEBUG 22:39:28 INFO 22:39:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:39:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:39:28 INFO [loop_until]: OK (rc = 0) 22:39:28 DEBUG --- stdout --- 22:39:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 3328Mi am-55f77847b7-79tz5 10m 2697Mi am-55f77847b7-c4982 10m 2414Mi ds-cts-0 7m 379Mi ds-cts-1 6m 365Mi ds-cts-2 7m 371Mi ds-idrepo-0 18m 13546Mi ds-idrepo-1 2713m 13196Mi ds-idrepo-2 17m 10302Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 10m 3415Mi idm-65858d8c4c-wd2fd 6m 1347Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1113m 379Mi 22:39:28 DEBUG --- stderr --- 22:39:28 DEBUG 
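The Thread-20/22/27 tracebacks earlier in this log show two stacked problems: the Prometheus service at prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090 refuses connections (the URL-encoded query in the 22:31:42 warning decodes to sum(rate(ds_replication_replica_remote_replicas_receive_delay_seconds {namespace='xlou',domain_name='ou=tokens'}[60s]))by(pod)), and the error handler at monitoring.py line 315 then calls self.logger(...) as if the LodestarLogger instance were a function, so the original FailException is masked by a TypeError and the monitoring thread dies. A minimal sketch of that failure mode and a likely fix follows; LodestarLogger's real interface is not visible in this log, so the .error() method used here is an assumption.

    # Sketch only: reproduces the TypeError seen in the tracebacks and shows a likely fix.
    # LodestarLogger's real API is not in the log; the .error() method below is assumed.
    import logging

    class LodestarLogger:
        """Hypothetical stand-in for the lodestar logging wrapper."""
        def __init__(self, name="lodemon"):
            self._log = logging.getLogger(name)

        def error(self, msg):
            self._log.error(msg)

    class FailException(Exception):
        """Stand-in for shared.lib.utils.exception.FailException."""

    def report_failed_query(logger, query):
        try:
            # HttpCmd.request_cmd raises this once its retries are exhausted.
            raise FailException('Failed to obtain response from server...')
        except FailException as e:
            # Buggy pattern from monitoring.py line 315 - calling the instance directly
            # masks the FailException with a TypeError and kills the thread:
            #   logger(f'Query: {query} failed with: {e}')
            # Calling a logging method instead keeps the thread alive:
            logger.error(f'Query: {query} failed with: {e}')

    if __name__ == '__main__':
        logging.basicConfig(level=logging.INFO)
        report_failed_query(LodestarLogger(),
                            "sum(rate(ds_replication_replica_remote_replicas_receive_delay_seconds{namespace='xlou'}[60s]))by(pod)")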
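The recurring "[loop_until]: kubectl --namespace=xlou top pods / top node (max_time=180, interval=5, expected_rc=[0]" entries are one-minute resource snapshots taken by polling kubectl until it returns an expected exit code. A rough sketch of such a loop_until helper is below; the real lodestar implementation is not shown in this log, so the name, signature, and behaviour are illustrative assumptions only.

    # Sketch only: an illustrative loop_until-style helper matching the log's
    # "(max_time=180, interval=5, expected_rc=[0])" pattern.
    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Run cmd until its return code is in expected_rc or max_time seconds pass."""
        deadline = time.monotonic() + max_time
        while True:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode in expected_rc:
                return result  # corresponds to "[loop_until]: OK (rc = 0)"
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{' '.join(cmd)} did not succeed within {max_time}s")
            time.sleep(interval)

    # Per-minute snapshots like the ones in this log:
    #   pods = loop_until(["kubectl", "--namespace=xlou", "top", "pods"])
    #   nodes = loop_until(["kubectl", "--namespace=xlou", "top", "node"])
    #   print(pods.stdout, nodes.stdout)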
22:39:28 INFO 22:39:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:39:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:39:28 INFO [loop_until]: OK (rc = 0) 22:39:28 DEBUG --- stdout --- 22:39:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 4374Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 3560Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3871Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 4740Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2615Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 14132Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 10952Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2777m 17% 13907Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1133m 7% 1908Mi 3% 22:39:28 DEBUG --- stderr --- 22:39:28 DEBUG 22:40:28 INFO 22:40:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:40:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:40:28 INFO [loop_until]: OK (rc = 0) 22:40:28 DEBUG --- stdout --- 22:40:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 9m 3328Mi am-55f77847b7-79tz5 12m 2708Mi am-55f77847b7-c4982 11m 2427Mi ds-cts-0 6m 379Mi ds-cts-1 6m 365Mi ds-cts-2 7m 370Mi ds-idrepo-0 12m 13546Mi ds-idrepo-1 2769m 13391Mi ds-idrepo-2 13m 10302Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 3415Mi idm-65858d8c4c-wd2fd 6m 1347Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1171m 379Mi 22:40:28 DEBUG --- stderr --- 22:40:28 DEBUG 22:40:28 INFO 22:40:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:40:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:40:28 INFO [loop_until]: OK (rc = 0) 22:40:28 DEBUG --- stdout --- 22:40:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 4375Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3568Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3881Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4739Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2613Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14121Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 10953Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2794m 17% 13882Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1248m 7% 1907Mi 3% 22:40:28 DEBUG --- stderr --- 22:40:28 DEBUG 22:41:28 INFO 22:41:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:41:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:41:28 INFO [loop_until]: OK (rc = 0) 22:41:28 DEBUG --- stdout --- 22:41:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 14m 3328Mi am-55f77847b7-79tz5 13m 2719Mi am-55f77847b7-c4982 10m 2438Mi ds-cts-0 8m 379Mi ds-cts-1 11m 366Mi ds-cts-2 7m 371Mi ds-idrepo-0 16m 13547Mi ds-idrepo-1 2757m 13419Mi ds-idrepo-2 17m 10307Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 10m 3415Mi idm-65858d8c4c-wd2fd 7m 1347Mi lodemon-755c6d9977-9wwrg 2m 65Mi 
login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1262m 381Mi 22:41:28 DEBUG --- stderr --- 22:41:28 DEBUG 22:41:28 INFO 22:41:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:41:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:41:28 INFO [loop_until]: OK (rc = 0) 22:41:28 DEBUG --- stdout --- 22:41:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4375Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3583Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3893Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4742Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2611Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 73m 0% 14123Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2768m 17% 13981Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1345m 8% 1909Mi 3% 22:41:28 DEBUG --- stderr --- 22:41:28 DEBUG 22:42:28 INFO 22:42:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:42:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:42:28 INFO [loop_until]: OK (rc = 0) 22:42:28 DEBUG --- stdout --- 22:42:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 11m 3328Mi am-55f77847b7-79tz5 11m 2728Mi am-55f77847b7-c4982 10m 2446Mi ds-cts-0 19m 377Mi ds-cts-1 6m 365Mi ds-cts-2 8m 370Mi ds-idrepo-0 13m 13546Mi ds-idrepo-1 2949m 13499Mi ds-idrepo-2 21m 10308Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 12m 3415Mi idm-65858d8c4c-wd2fd 10m 1349Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1324m 381Mi 22:42:28 DEBUG --- stderr --- 22:42:28 DEBUG 22:42:28 INFO 22:42:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:42:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:42:28 INFO [loop_until]: OK (rc = 0) 22:42:28 DEBUG --- stdout --- 22:42:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 4373Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3591Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3900Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4740Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2614Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14127Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 70m 0% 10954Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3079m 19% 14145Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1397m 8% 1908Mi 3% 22:42:28 DEBUG --- stderr --- 22:42:28 DEBUG 22:43:28 INFO 22:43:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:43:28 INFO [loop_until]: OK (rc = 0) 22:43:28 DEBUG --- stdout --- 22:43:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 3328Mi am-55f77847b7-79tz5 11m 2740Mi am-55f77847b7-c4982 10m 2459Mi ds-cts-0 8m 378Mi ds-cts-1 7m 366Mi ds-cts-2 15m 369Mi ds-idrepo-0 12m 13547Mi ds-idrepo-1 2894m 13621Mi ds-idrepo-2 13m 10309Mi 
end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 9m 3415Mi idm-65858d8c4c-wd2fd 7m 1350Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1396m 381Mi 22:43:28 DEBUG --- stderr --- 22:43:28 DEBUG 22:43:28 INFO 22:43:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:43:28 INFO [loop_until]: OK (rc = 0) 22:43:28 DEBUG --- stdout --- 22:43:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 4373Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 3601Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3911Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4741Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2616Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14127Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 10956Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2977m 18% 14173Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1486m 9% 1908Mi 3% 22:43:28 DEBUG --- stderr --- 22:43:28 DEBUG 22:44:28 INFO 22:44:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:44:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:44:28 INFO [loop_until]: OK (rc = 0) 22:44:28 DEBUG --- stdout --- 22:44:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 9m 3328Mi am-55f77847b7-79tz5 8m 2749Mi am-55f77847b7-c4982 9m 2470Mi ds-cts-0 6m 377Mi ds-cts-1 7m 366Mi ds-cts-2 7m 370Mi ds-idrepo-0 20m 13546Mi ds-idrepo-1 10m 13632Mi ds-idrepo-2 15m 10309Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 3415Mi idm-65858d8c4c-wd2fd 6m 1350Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 99Mi 22:44:28 DEBUG --- stderr --- 22:44:28 DEBUG 22:44:28 INFO 22:44:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:44:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:44:28 INFO [loop_until]: OK (rc = 0) 22:44:28 DEBUG --- stdout --- 22:44:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 4371Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 3614Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3921Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4738Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2619Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 14128Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 10956Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1633Mi 2% 22:44:28 DEBUG --- stderr --- 22:44:28 DEBUG 22:45:28 INFO 22:45:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:45:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:45:28 INFO [loop_until]: OK (rc = 0) 22:45:28 DEBUG --- stdout --- 22:45:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 3328Mi am-55f77847b7-79tz5 22m 2762Mi am-55f77847b7-c4982 10m 2480Mi ds-cts-0 7m 377Mi ds-cts-1 9m 366Mi 
ds-cts-2 8m 370Mi ds-idrepo-0 18m 13547Mi ds-idrepo-1 18m 13631Mi ds-idrepo-2 2459m 12196Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 3415Mi idm-65858d8c4c-wd2fd 6m 1350Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1081m 370Mi 22:45:28 DEBUG --- stderr --- 22:45:28 DEBUG 22:45:28 INFO 22:45:28 INFO [loop_until]: kubectl --namespace=xlou top node 22:45:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:45:29 INFO [loop_until]: OK (rc = 0) 22:45:29 DEBUG --- stdout --- 22:45:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 4373Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 3625Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 3934Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4738Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 65m 0% 2617Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 14128Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2749m 17% 12889Mi 21% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14184Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1145m 7% 1902Mi 3% 22:45:29 DEBUG --- stderr --- 22:45:29 DEBUG 22:46:28 INFO 22:46:28 INFO [loop_until]: kubectl --namespace=xlou top pods 22:46:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:46:29 INFO [loop_until]: OK (rc = 0) 22:46:29 DEBUG --- stdout --- 22:46:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 9m 3332Mi am-55f77847b7-79tz5 8m 2774Mi am-55f77847b7-c4982 7m 2491Mi ds-cts-0 6m 379Mi ds-cts-1 7m 365Mi ds-cts-2 9m 369Mi ds-idrepo-0 13m 13547Mi ds-idrepo-1 10m 13631Mi ds-idrepo-2 2852m 13347Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 3416Mi idm-65858d8c4c-wd2fd 6m 1350Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1166m 369Mi 22:46:29 DEBUG --- stderr --- 22:46:29 DEBUG 22:46:29 INFO 22:46:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:46:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:46:29 INFO [loop_until]: OK (rc = 0) 22:46:29 DEBUG --- stdout --- 22:46:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4377Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 3638Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3945Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4740Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2619Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2829m 17% 13876Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14183Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1216m 7% 1902Mi 3% 22:46:29 DEBUG --- stderr --- 22:46:29 DEBUG 22:47:29 INFO 22:47:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:47:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:47:29 INFO [loop_until]: OK (rc = 0) 22:47:29 DEBUG --- stdout --- 22:47:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 8m 
3331Mi am-55f77847b7-79tz5 9m 2786Mi am-55f77847b7-c4982 11m 2502Mi ds-cts-0 7m 378Mi ds-cts-1 8m 366Mi ds-cts-2 7m 371Mi ds-idrepo-0 13m 13547Mi ds-idrepo-1 19m 13634Mi ds-idrepo-2 2920m 13425Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 12m 3417Mi idm-65858d8c4c-wd2fd 10m 1350Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1215m 372Mi 22:47:29 DEBUG --- stderr --- 22:47:29 DEBUG 22:47:29 INFO 22:47:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:47:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:47:29 INFO [loop_until]: OK (rc = 0) 22:47:29 DEBUG --- stdout --- 22:47:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4377Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 3647Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3959Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4741Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2616Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14129Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2991m 18% 13987Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14189Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1310m 8% 1902Mi 3% 22:47:29 DEBUG --- stderr --- 22:47:29 DEBUG 22:48:29 INFO 22:48:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:48:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:48:29 INFO [loop_until]: OK (rc = 0) 22:48:29 DEBUG --- stdout --- 22:48:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 15m 3339Mi am-55f77847b7-79tz5 13m 2820Mi am-55f77847b7-c4982 11m 2513Mi ds-cts-0 7m 378Mi ds-cts-1 8m 366Mi ds-cts-2 9m 371Mi ds-idrepo-0 12m 13547Mi ds-idrepo-1 20m 13633Mi ds-idrepo-2 2806m 13461Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 10m 3417Mi idm-65858d8c4c-wd2fd 7m 1351Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1295m 372Mi 22:48:29 DEBUG --- stderr --- 22:48:29 DEBUG 22:48:29 INFO 22:48:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:48:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:48:29 INFO [loop_until]: OK (rc = 0) 22:48:29 DEBUG --- stdout --- 22:48:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4383Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 3660Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3992Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4744Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2617Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14126Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2897m 18% 14022Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14189Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1400m 8% 1903Mi 3% 22:48:29 DEBUG --- stderr --- 22:48:29 DEBUG 22:49:29 INFO 22:49:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:49:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:49:29 INFO [loop_until]: OK (rc = 0) 22:49:29 DEBUG --- stdout 
--- 22:49:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 3339Mi am-55f77847b7-79tz5 10m 2832Mi am-55f77847b7-c4982 11m 2526Mi ds-cts-0 7m 378Mi ds-cts-1 7m 366Mi ds-cts-2 7m 372Mi ds-idrepo-0 13m 13547Mi ds-idrepo-1 10m 13633Mi ds-idrepo-2 3170m 13616Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 3417Mi idm-65858d8c4c-wd2fd 14m 1353Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1393m 372Mi 22:49:29 DEBUG --- stderr --- 22:49:29 DEBUG 22:49:29 INFO 22:49:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:49:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:49:29 INFO [loop_until]: OK (rc = 0) 22:49:29 DEBUG --- stdout --- 22:49:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4384Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 3673Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 4003Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4743Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 2622Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3258m 20% 14169Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14191Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1465m 9% 1903Mi 3% 22:49:29 DEBUG --- stderr --- 22:49:29 DEBUG 22:50:29 INFO 22:50:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:50:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:50:29 INFO [loop_until]: OK (rc = 0) 22:50:29 DEBUG --- stdout --- 22:50:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 9m 3339Mi am-55f77847b7-79tz5 8m 2844Mi am-55f77847b7-c4982 9m 2538Mi ds-cts-0 7m 378Mi ds-cts-1 6m 367Mi ds-cts-2 8m 371Mi ds-idrepo-0 12m 13547Mi ds-idrepo-1 10m 13633Mi ds-idrepo-2 11m 13615Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 9m 3418Mi idm-65858d8c4c-wd2fd 7m 1354Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 99Mi 22:50:29 DEBUG --- stderr --- 22:50:29 DEBUG 22:50:29 INFO 22:50:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:50:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:50:29 INFO [loop_until]: OK (rc = 0) 22:50:29 DEBUG --- stdout --- 22:50:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4385Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 3684Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 4015Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4744Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2131Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2622Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14128Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14170Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1633Mi 2% 22:50:29 DEBUG --- stderr --- 22:50:29 DEBUG 22:51:29 INFO 22:51:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:51:29 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 22:51:29 INFO [loop_until]: OK (rc = 0) 22:51:29 DEBUG --- stdout --- 22:51:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 126m 3352Mi am-55f77847b7-79tz5 100m 3014Mi am-55f77847b7-c4982 10m 2546Mi ds-cts-0 9m 380Mi ds-cts-1 8m 368Mi ds-cts-2 11m 372Mi ds-idrepo-0 12m 13547Mi ds-idrepo-1 583m 13635Mi ds-idrepo-2 448m 13629Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2590m 3898Mi idm-65858d8c4c-wd2fd 573m 2839Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 2097m 513Mi 22:51:29 DEBUG --- stderr --- 22:51:29 DEBUG 22:51:29 INFO 22:51:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:51:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:51:29 INFO [loop_until]: OK (rc = 0) 22:51:29 DEBUG --- stdout --- 22:51:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 4397Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 191m 1% 3870Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 181m 1% 4159Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 2109m 13% 5222Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 165m 1% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3291m 20% 5040Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 639m 4% 14128Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 195m 1% 14176Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 281m 1% 14201Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2118m 13% 2051Mi 3% 22:51:29 DEBUG --- stderr --- 22:51:29 DEBUG 22:52:29 INFO 22:52:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:52:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:52:29 INFO [loop_until]: OK (rc = 0) 22:52:29 DEBUG --- stdout --- 22:52:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 109m 4388Mi am-55f77847b7-79tz5 111m 4137Mi am-55f77847b7-c4982 106m 4094Mi ds-cts-0 7m 380Mi ds-cts-1 8m 369Mi ds-cts-2 7m 372Mi ds-idrepo-0 7000m 13778Mi ds-idrepo-1 1583m 13643Mi ds-idrepo-2 1559m 13690Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7439m 4153Mi idm-65858d8c4c-wd2fd 7112m 4015Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 967m 542Mi 22:52:29 DEBUG --- stderr --- 22:52:29 DEBUG 22:52:29 INFO 22:52:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:52:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:52:29 INFO [loop_until]: OK (rc = 0) 22:52:29 DEBUG --- stdout --- 22:52:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 5272Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 168m 1% 5544Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 7428m 46% 5470Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1887m 11% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7510m 47% 5278Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7059m 44% 14344Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1609m 10% 14233Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1586m 9% 14190Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1066m 6% 2070Mi 3% 22:52:29 DEBUG 
--- stderr --- 22:52:29 DEBUG 22:53:29 INFO 22:53:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:53:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:53:29 INFO [loop_until]: OK (rc = 0) 22:53:29 DEBUG --- stdout --- 22:53:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 107m 5714Mi am-55f77847b7-79tz5 109m 5706Mi am-55f77847b7-c4982 98m 5390Mi ds-cts-0 6m 380Mi ds-cts-1 7m 368Mi ds-cts-2 7m 372Mi ds-idrepo-0 8134m 13820Mi ds-idrepo-1 1878m 13780Mi ds-idrepo-2 1921m 13756Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6875m 4171Mi idm-65858d8c4c-wd2fd 7376m 4064Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 988m 546Mi 22:53:29 DEBUG --- stderr --- 22:53:29 DEBUG 22:53:29 INFO 22:53:29 INFO [loop_until]: kubectl --namespace=xlou top node 22:53:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:53:30 INFO [loop_until]: OK (rc = 0) 22:53:30 DEBUG --- stdout --- 22:53:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6754Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 175m 1% 6459Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 159m 1% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6987m 43% 5498Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1878m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7524m 47% 5325Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7702m 48% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2099m 13% 14295Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2125m 13% 14312Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1031m 6% 2072Mi 3% 22:53:30 DEBUG --- stderr --- 22:53:30 DEBUG 22:54:29 INFO 22:54:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:54:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:54:29 INFO [loop_until]: OK (rc = 0) 22:54:29 DEBUG --- stdout --- 22:54:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 85m 5715Mi am-55f77847b7-79tz5 93m 5705Mi am-55f77847b7-c4982 82m 5646Mi ds-cts-0 6m 381Mi ds-cts-1 7m 370Mi ds-cts-2 7m 372Mi ds-idrepo-0 7289m 13821Mi ds-idrepo-1 1767m 13828Mi ds-idrepo-2 1682m 13777Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7158m 4187Mi idm-65858d8c4c-wd2fd 7190m 4077Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 1092m 546Mi 22:54:29 DEBUG --- stderr --- 22:54:29 DEBUG 22:54:30 INFO 22:54:30 INFO [loop_until]: kubectl --namespace=xlou top node 22:54:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:54:30 INFO [loop_until]: OK (rc = 0) 22:54:30 DEBUG --- stdout --- 22:54:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6753Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6786Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7372m 46% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1922m 12% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7466m 46% 5355Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7454m 46% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1657m 10% 14310Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1629m 10% 14353Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1117m 7% 2070Mi 3% 22:54:30 DEBUG --- stderr --- 22:54:30 DEBUG 22:55:29 INFO 22:55:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:55:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:55:29 INFO [loop_until]: OK (rc = 0) 22:55:29 DEBUG --- stdout --- 22:55:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 87m 5719Mi am-55f77847b7-79tz5 88m 5735Mi am-55f77847b7-c4982 82m 5670Mi ds-cts-0 7m 381Mi ds-cts-1 7m 368Mi ds-cts-2 9m 372Mi ds-idrepo-0 7980m 13823Mi ds-idrepo-1 2068m 13833Mi ds-idrepo-2 1969m 13795Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6749m 4218Mi idm-65858d8c4c-wd2fd 7113m 4108Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 975m 547Mi 22:55:29 DEBUG --- stderr --- 22:55:29 DEBUG 22:55:30 INFO 22:55:30 INFO [loop_until]: kubectl --namespace=xlou top node 22:55:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:55:30 INFO [loop_until]: OK (rc = 0) 22:55:30 DEBUG --- stdout --- 22:55:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6757Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7226m 45% 5533Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1893m 11% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7433m 46% 5368Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8010m 50% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2032m 12% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2031m 12% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1092m 6% 2069Mi 3% 22:55:30 DEBUG --- stderr --- 22:55:30 DEBUG 22:56:29 INFO 22:56:29 INFO [loop_until]: kubectl --namespace=xlou top pods 22:56:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:56:30 INFO [loop_until]: OK (rc = 0) 22:56:30 DEBUG --- stdout --- 22:56:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5721Mi am-55f77847b7-79tz5 90m 5741Mi am-55f77847b7-c4982 88m 5671Mi ds-cts-0 6m 381Mi ds-cts-1 7m 368Mi ds-cts-2 7m 372Mi ds-idrepo-0 8357m 13817Mi ds-idrepo-1 2161m 13819Mi ds-idrepo-2 2105m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6931m 4244Mi idm-65858d8c4c-wd2fd 7219m 4120Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 989m 547Mi 22:56:30 DEBUG --- stderr --- 22:56:30 DEBUG 22:56:30 INFO 22:56:30 INFO [loop_until]: kubectl --namespace=xlou top node 22:56:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:56:30 INFO [loop_until]: OK (rc = 0) 22:56:30 DEBUG --- stdout --- 22:56:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6759Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7188m 45% 5557Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1888m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7220m 45% 5376Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 
1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8468m 53% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2420m 15% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2355m 14% 14335Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1074m 6% 2071Mi 3% 22:56:30 DEBUG --- stderr --- 22:56:30 DEBUG 22:57:30 INFO 22:57:30 INFO [loop_until]: kubectl --namespace=xlou top pods 22:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:57:30 INFO [loop_until]: OK (rc = 0) 22:57:30 DEBUG --- stdout --- 22:57:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 84m 5758Mi am-55f77847b7-79tz5 77m 5777Mi am-55f77847b7-c4982 81m 5708Mi ds-cts-0 6m 381Mi ds-cts-1 8m 368Mi ds-cts-2 7m 372Mi ds-idrepo-0 8163m 13806Mi ds-idrepo-1 2476m 13811Mi ds-idrepo-2 2814m 13760Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6835m 4272Mi idm-65858d8c4c-wd2fd 6857m 4133Mi lodemon-755c6d9977-9wwrg 1m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 987m 548Mi 22:57:30 DEBUG --- stderr --- 22:57:30 DEBUG 22:57:30 INFO 22:57:30 INFO [loop_until]: kubectl --namespace=xlou top node 22:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:57:30 INFO [loop_until]: OK (rc = 0) 22:57:30 DEBUG --- stdout --- 22:57:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6799Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7289m 45% 5583Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1869m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7271m 45% 5393Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8210m 51% 14351Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2396m 15% 14324Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2598m 16% 14352Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1069m 6% 2073Mi 3% 22:57:30 DEBUG --- stderr --- 22:57:30 DEBUG 22:58:30 INFO 22:58:30 INFO [loop_until]: kubectl --namespace=xlou top pods 22:58:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:58:30 INFO [loop_until]: OK (rc = 0) 22:58:30 DEBUG --- stdout --- 22:58:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 83m 5758Mi am-55f77847b7-79tz5 85m 5777Mi am-55f77847b7-c4982 91m 5702Mi ds-cts-0 6m 381Mi ds-cts-1 11m 369Mi ds-cts-2 11m 372Mi ds-idrepo-0 8238m 13830Mi ds-idrepo-1 2003m 13817Mi ds-idrepo-2 2249m 13830Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6900m 4295Mi idm-65858d8c4c-wd2fd 6675m 4144Mi lodemon-755c6d9977-9wwrg 1m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 985m 549Mi 22:58:30 DEBUG --- stderr --- 22:58:30 DEBUG 22:58:30 INFO 22:58:30 INFO [loop_until]: kubectl --namespace=xlou top node 22:58:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:58:30 INFO [loop_until]: OK (rc = 0) 22:58:30 DEBUG --- stdout --- 22:58:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6799Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7206m 45% 
5610Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1887m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7343m 46% 5406Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8048m 50% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2106m 13% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2118m 13% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1071m 6% 2071Mi 3% 22:58:30 DEBUG --- stderr --- 22:58:30 DEBUG 22:59:30 INFO 22:59:30 INFO [loop_until]: kubectl --namespace=xlou top pods 22:59:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:59:30 INFO [loop_until]: OK (rc = 0) 22:59:30 DEBUG --- stdout --- 22:59:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 83m 5759Mi am-55f77847b7-79tz5 80m 5777Mi am-55f77847b7-c4982 79m 5702Mi ds-cts-0 6m 382Mi ds-cts-1 7m 368Mi ds-cts-2 7m 372Mi ds-idrepo-0 7945m 13786Mi ds-idrepo-1 2415m 13823Mi ds-idrepo-2 2413m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6821m 4321Mi idm-65858d8c4c-wd2fd 7095m 4160Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 996m 549Mi 22:59:30 DEBUG --- stderr --- 22:59:30 DEBUG 22:59:30 INFO 22:59:30 INFO [loop_until]: kubectl --namespace=xlou top node 22:59:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:59:30 INFO [loop_until]: OK (rc = 0) 22:59:30 DEBUG --- stdout --- 22:59:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6797Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7077m 44% 5634Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1870m 11% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7416m 46% 5414Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8213m 51% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2508m 15% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2478m 15% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1049m 6% 2073Mi 3% 22:59:30 DEBUG --- stderr --- 22:59:30 DEBUG 23:00:30 INFO 23:00:30 INFO [loop_until]: kubectl --namespace=xlou top pods 23:00:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:00:30 INFO [loop_until]: OK (rc = 0) 23:00:30 DEBUG --- stdout --- 23:00:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 82m 5764Mi am-55f77847b7-79tz5 92m 5781Mi am-55f77847b7-c4982 82m 5707Mi ds-cts-0 7m 381Mi ds-cts-1 10m 368Mi ds-cts-2 7m 372Mi ds-idrepo-0 8173m 13828Mi ds-idrepo-1 2287m 13823Mi ds-idrepo-2 2664m 13816Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6800m 4347Mi idm-65858d8c4c-wd2fd 7164m 4182Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 969m 550Mi 23:00:30 DEBUG --- stderr --- 23:00:30 DEBUG 23:00:30 INFO 23:00:30 INFO [loop_until]: kubectl --namespace=xlou top node 23:00:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:00:30 INFO [loop_until]: OK (rc = 0) 23:00:30 DEBUG --- stdout --- 23:00:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6803Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6843Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 155m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7045m 44% 5656Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1897m 11% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7185m 45% 5441Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8125m 51% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2591m 16% 14342Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2542m 15% 14359Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1060m 6% 2070Mi 3% 23:00:30 DEBUG --- stderr --- 23:00:30 DEBUG 23:01:30 INFO 23:01:30 INFO [loop_until]: kubectl --namespace=xlou top pods 23:01:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:01:30 INFO [loop_until]: OK (rc = 0) 23:01:30 DEBUG --- stdout --- 23:01:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 90m 5765Mi am-55f77847b7-79tz5 81m 5781Mi am-55f77847b7-c4982 81m 5707Mi ds-cts-0 7m 381Mi ds-cts-1 14m 368Mi ds-cts-2 8m 372Mi ds-idrepo-0 8466m 13794Mi ds-idrepo-1 2455m 13810Mi ds-idrepo-2 2531m 13847Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6859m 4368Mi idm-65858d8c4c-wd2fd 7369m 4207Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 991m 550Mi 23:01:30 DEBUG --- stderr --- 23:01:30 DEBUG 23:01:30 INFO 23:01:30 INFO [loop_until]: kubectl --namespace=xlou top node 23:01:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:01:30 INFO [loop_until]: OK (rc = 0) 23:01:30 DEBUG --- stdout --- 23:01:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 137m 0% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7387m 46% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1912m 12% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7372m 46% 5467Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8640m 54% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2562m 16% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2595m 16% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1092m 6% 2073Mi 3% 23:01:30 DEBUG --- stderr --- 23:01:30 DEBUG 23:02:30 INFO 23:02:30 INFO [loop_until]: kubectl --namespace=xlou top pods 23:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:02:30 INFO [loop_until]: OK (rc = 0) 23:02:30 DEBUG --- stdout --- 23:02:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 95m 5765Mi am-55f77847b7-79tz5 84m 5782Mi am-55f77847b7-c4982 89m 5707Mi ds-cts-0 8m 382Mi ds-cts-1 9m 369Mi ds-cts-2 9m 373Mi ds-idrepo-0 8621m 13818Mi ds-idrepo-1 2084m 13817Mi ds-idrepo-2 2093m 13833Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6883m 4398Mi idm-65858d8c4c-wd2fd 7377m 4237Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 983m 551Mi 23:02:30 DEBUG --- stderr --- 23:02:30 DEBUG 23:02:31 INFO 23:02:31 INFO [loop_until]: kubectl --namespace=xlou top node 23:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:02:31 INFO [loop_until]: OK (rc = 0) 23:02:31 DEBUG --- stdout --- 23:02:31 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7245m 45% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1918m 12% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7685m 48% 5496Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8918m 56% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2206m 13% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2303m 14% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1067m 6% 2072Mi 3% 23:02:31 DEBUG --- stderr --- 23:02:31 DEBUG 23:03:30 INFO 23:03:30 INFO [loop_until]: kubectl --namespace=xlou top pods 23:03:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:03:30 INFO [loop_until]: OK (rc = 0) 23:03:30 DEBUG --- stdout --- 23:03:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 84m 5765Mi am-55f77847b7-79tz5 87m 5783Mi am-55f77847b7-c4982 95m 5710Mi ds-cts-0 6m 382Mi ds-cts-1 7m 368Mi ds-cts-2 6m 373Mi ds-idrepo-0 8591m 13808Mi ds-idrepo-1 2209m 13788Mi ds-idrepo-2 2576m 13829Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7193m 4425Mi idm-65858d8c4c-wd2fd 7087m 4262Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 987m 550Mi 23:03:30 DEBUG --- stderr --- 23:03:30 DEBUG 23:03:31 INFO 23:03:31 INFO [loop_until]: kubectl --namespace=xlou top node 23:03:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:03:31 INFO [loop_until]: OK (rc = 0) 23:03:31 DEBUG --- stdout --- 23:03:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7358m 46% 5738Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1918m 12% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7464m 46% 5520Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8792m 55% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2367m 14% 14306Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2368m 14% 14322Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1071m 6% 2070Mi 3% 23:03:31 DEBUG --- stderr --- 23:03:31 DEBUG 23:04:30 INFO 23:04:30 INFO [loop_until]: kubectl --namespace=xlou top pods 23:04:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:04:30 INFO [loop_until]: OK (rc = 0) 23:04:30 DEBUG --- stdout --- 23:04:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 84m 5765Mi am-55f77847b7-79tz5 81m 5783Mi am-55f77847b7-c4982 81m 5710Mi ds-cts-0 6m 382Mi ds-cts-1 7m 368Mi ds-cts-2 6m 373Mi ds-idrepo-0 8488m 13851Mi ds-idrepo-1 2409m 13807Mi ds-idrepo-2 2247m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6979m 4449Mi idm-65858d8c4c-wd2fd 7317m 4288Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 1018m 551Mi 23:04:30 DEBUG --- stderr --- 23:04:30 DEBUG 23:04:31 INFO 23:04:31 INFO [loop_until]: kubectl 
--namespace=xlou top node 23:04:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:04:31 INFO [loop_until]: OK (rc = 0) 23:04:31 DEBUG --- stdout --- 23:04:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7381m 46% 5762Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1914m 12% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7443m 46% 5556Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8451m 53% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2533m 15% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2241m 14% 14349Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1095m 6% 2074Mi 3% 23:04:31 DEBUG --- stderr --- 23:04:31 DEBUG 23:05:30 INFO 23:05:30 INFO [loop_until]: kubectl --namespace=xlou top pods 23:05:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:05:30 INFO [loop_until]: OK (rc = 0) 23:05:30 DEBUG --- stdout --- 23:05:30 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 85m 5766Mi am-55f77847b7-79tz5 82m 5786Mi am-55f77847b7-c4982 82m 5712Mi ds-cts-0 9m 382Mi ds-cts-1 6m 368Mi ds-cts-2 6m 373Mi ds-idrepo-0 8064m 13823Mi ds-idrepo-1 2312m 13824Mi ds-idrepo-2 1964m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7078m 4468Mi idm-65858d8c4c-wd2fd 7093m 4316Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 987m 552Mi 23:05:31 DEBUG --- stderr --- 23:05:31 DEBUG 23:05:31 INFO 23:05:31 INFO [loop_until]: kubectl --namespace=xlou top node 23:05:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:05:31 INFO [loop_until]: OK (rc = 0) 23:05:31 DEBUG --- stdout --- 23:05:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7309m 45% 5788Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1912m 12% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7379m 46% 5570Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8187m 51% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2058m 12% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2025m 12% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1070m 6% 2076Mi 3% 23:05:31 DEBUG --- stderr --- 23:05:31 DEBUG 23:06:31 INFO 23:06:31 INFO [loop_until]: kubectl --namespace=xlou top pods 23:06:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:06:31 INFO [loop_until]: OK (rc = 0) 23:06:31 DEBUG --- stdout --- 23:06:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 86m 5766Mi am-55f77847b7-79tz5 83m 5788Mi am-55f77847b7-c4982 82m 5714Mi ds-cts-0 12m 382Mi ds-cts-1 7m 369Mi ds-cts-2 6m 373Mi ds-idrepo-0 9619m 13808Mi ds-idrepo-1 2767m 13812Mi ds-idrepo-2 2645m 13774Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7219m 4493Mi idm-65858d8c4c-wd2fd 7181m 4341Mi lodemon-755c6d9977-9wwrg 2m 65Mi 
login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1005m 553Mi 23:06:31 DEBUG --- stderr --- 23:06:31 DEBUG 23:06:31 INFO 23:06:31 INFO [loop_until]: kubectl --namespace=xlou top node 23:06:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:06:31 INFO [loop_until]: OK (rc = 0) 23:06:31 DEBUG --- stdout --- 23:06:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 147m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7337m 46% 5814Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1838m 11% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7574m 47% 5596Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9467m 59% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2849m 17% 14305Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2851m 17% 14304Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1032m 6% 2074Mi 3% 23:06:31 DEBUG --- stderr --- 23:06:31 DEBUG 23:07:31 INFO 23:07:31 INFO [loop_until]: kubectl --namespace=xlou top pods 23:07:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:07:31 INFO [loop_until]: OK (rc = 0) 23:07:31 DEBUG --- stdout --- 23:07:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 82m 5766Mi am-55f77847b7-79tz5 90m 5787Mi am-55f77847b7-c4982 85m 5716Mi ds-cts-0 6m 382Mi ds-cts-1 7m 369Mi ds-cts-2 6m 373Mi ds-idrepo-0 7807m 13810Mi ds-idrepo-1 1823m 13827Mi ds-idrepo-2 1847m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7131m 4523Mi idm-65858d8c4c-wd2fd 7121m 4361Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 988m 553Mi 23:07:31 DEBUG --- stderr --- 23:07:31 DEBUG 23:07:31 INFO 23:07:31 INFO [loop_until]: kubectl --namespace=xlou top node 23:07:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:07:31 INFO [loop_until]: OK (rc = 0) 23:07:31 DEBUG --- stdout --- 23:07:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7454m 46% 5838Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1895m 11% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7225m 45% 5623Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8063m 50% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1973m 12% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1928m 12% 14364Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1070m 6% 2077Mi 3% 23:07:31 DEBUG --- stderr --- 23:07:31 DEBUG 23:08:31 INFO 23:08:31 INFO [loop_until]: kubectl --namespace=xlou top pods 23:08:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:08:31 INFO [loop_until]: OK (rc = 0) 23:08:31 DEBUG --- stdout --- 23:08:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 87m 5766Mi am-55f77847b7-79tz5 89m 5787Mi am-55f77847b7-c4982 81m 5717Mi ds-cts-0 6m 382Mi ds-cts-1 17m 366Mi ds-cts-2 6m 373Mi ds-idrepo-0 9051m 13821Mi ds-idrepo-1 2316m 13831Mi 
ds-idrepo-2 2609m 13821Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7174m 4550Mi idm-65858d8c4c-wd2fd 7244m 4391Mi lodemon-755c6d9977-9wwrg 1m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1001m 554Mi 23:08:31 DEBUG --- stderr --- 23:08:31 DEBUG 23:08:31 INFO 23:08:31 INFO [loop_until]: kubectl --namespace=xlou top node 23:08:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:08:31 INFO [loop_until]: OK (rc = 0) 23:08:31 DEBUG --- stdout --- 23:08:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 147m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7445m 46% 5865Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1833m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7454m 46% 5650Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9080m 57% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2507m 15% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2326m 14% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1071m 6% 2074Mi 3% 23:08:31 DEBUG --- stderr --- 23:08:31 DEBUG 23:09:31 INFO 23:09:31 INFO [loop_until]: kubectl --namespace=xlou top pods 23:09:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:09:31 INFO [loop_until]: OK (rc = 0) 23:09:31 DEBUG --- stdout --- 23:09:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 86m 5766Mi am-55f77847b7-79tz5 93m 5787Mi am-55f77847b7-c4982 84m 5717Mi ds-cts-0 6m 383Mi ds-cts-1 9m 366Mi ds-cts-2 6m 373Mi ds-idrepo-0 8176m 13820Mi ds-idrepo-1 2303m 13789Mi ds-idrepo-2 2266m 13821Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6892m 4575Mi idm-65858d8c4c-wd2fd 7152m 4415Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 967m 554Mi 23:09:31 DEBUG --- stderr --- 23:09:31 DEBUG 23:09:31 INFO 23:09:31 INFO [loop_until]: kubectl --namespace=xlou top node 23:09:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:09:31 INFO [loop_until]: OK (rc = 0) 23:09:31 DEBUG --- stdout --- 23:09:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7308m 45% 5892Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1895m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7418m 46% 5676Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8615m 54% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2305m 14% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2452m 15% 14330Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1073m 6% 2075Mi 3% 23:09:31 DEBUG --- stderr --- 23:09:31 DEBUG 23:10:31 INFO 23:10:31 INFO [loop_until]: kubectl --namespace=xlou top pods 23:10:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:10:31 INFO [loop_until]: OK (rc = 0) 23:10:31 DEBUG --- stdout --- 23:10:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 84m 5766Mi 
am-55f77847b7-79tz5 89m 5788Mi am-55f77847b7-c4982 87m 5715Mi ds-cts-0 8m 382Mi ds-cts-1 8m 368Mi ds-cts-2 6m 373Mi ds-idrepo-0 8867m 13780Mi ds-idrepo-1 2102m 13810Mi ds-idrepo-2 2226m 13784Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6946m 4600Mi idm-65858d8c4c-wd2fd 7064m 4448Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1021m 554Mi 23:10:31 DEBUG --- stderr --- 23:10:31 DEBUG 23:10:32 INFO 23:10:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:10:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:10:32 INFO [loop_until]: OK (rc = 0) 23:10:32 DEBUG --- stdout --- 23:10:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7151m 45% 5919Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1844m 11% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7500m 47% 5704Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8901m 56% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2230m 14% 14306Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2203m 13% 14333Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1081m 6% 2073Mi 3% 23:10:32 DEBUG --- stderr --- 23:10:32 DEBUG 23:11:31 INFO 23:11:31 INFO [loop_until]: kubectl --namespace=xlou top pods 23:11:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:11:31 INFO [loop_until]: OK (rc = 0) 23:11:31 DEBUG --- stdout --- 23:11:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 81m 5766Mi am-55f77847b7-79tz5 85m 5787Mi am-55f77847b7-c4982 78m 5715Mi ds-cts-0 7m 382Mi ds-cts-1 7m 366Mi ds-cts-2 6m 373Mi ds-idrepo-0 8596m 13824Mi ds-idrepo-1 2632m 13810Mi ds-idrepo-2 2631m 13848Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6848m 4624Mi idm-65858d8c4c-wd2fd 7032m 4469Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1018m 555Mi 23:11:31 DEBUG --- stderr --- 23:11:31 DEBUG 23:11:32 INFO 23:11:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:11:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:11:32 INFO [loop_until]: OK (rc = 0) 23:11:32 DEBUG --- stdout --- 23:11:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 142m 0% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7429m 46% 5942Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1903m 11% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7286m 45% 5727Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8928m 56% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2574m 16% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2688m 16% 14316Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1082m 6% 2074Mi 3% 23:11:32 DEBUG --- stderr --- 23:11:32 DEBUG 23:12:31 INFO 23:12:31 INFO [loop_until]: kubectl --namespace=xlou top pods 23:12:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:12:31 INFO 
[loop_until]: OK (rc = 0) 23:12:31 DEBUG --- stdout --- 23:12:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 84m 5766Mi am-55f77847b7-79tz5 83m 5787Mi am-55f77847b7-c4982 85m 5715Mi ds-cts-0 6m 382Mi ds-cts-1 7m 366Mi ds-cts-2 6m 373Mi ds-idrepo-0 8391m 13841Mi ds-idrepo-1 2274m 13833Mi ds-idrepo-2 2140m 13834Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6876m 4649Mi idm-65858d8c4c-wd2fd 7232m 4499Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 971m 555Mi 23:12:31 DEBUG --- stderr --- 23:12:31 DEBUG 23:12:32 INFO 23:12:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:12:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:12:32 INFO [loop_until]: OK (rc = 0) 23:12:32 DEBUG --- stdout --- 23:12:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7197m 45% 5970Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1825m 11% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7442m 46% 5752Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8114m 51% 14413Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1967m 12% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1963m 12% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1052m 6% 2075Mi 3% 23:12:32 DEBUG --- stderr --- 23:12:32 DEBUG 23:13:32 INFO 23:13:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:13:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:13:32 INFO [loop_until]: OK (rc = 0) 23:13:32 DEBUG --- stdout --- 23:13:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 80m 5766Mi am-55f77847b7-79tz5 80m 5788Mi am-55f77847b7-c4982 78m 5715Mi ds-cts-0 7m 383Mi ds-cts-1 8m 366Mi ds-cts-2 8m 373Mi ds-idrepo-0 8193m 13844Mi ds-idrepo-1 1843m 13851Mi ds-idrepo-2 2021m 13836Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6997m 4671Mi idm-65858d8c4c-wd2fd 7203m 4523Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 967m 556Mi 23:13:32 DEBUG --- stderr --- 23:13:32 DEBUG 23:13:32 INFO 23:13:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:13:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:13:32 INFO [loop_until]: OK (rc = 0) 23:13:32 DEBUG --- stdout --- 23:13:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 138m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6973m 43% 5993Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1876m 11% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7407m 46% 5780Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7821m 49% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2010m 12% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1962m 12% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1061m 6% 2076Mi 3% 23:13:32 DEBUG --- stderr --- 23:13:32 
DEBUG 23:14:32 INFO 23:14:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:14:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:14:32 INFO [loop_until]: OK (rc = 0) 23:14:32 DEBUG --- stdout --- 23:14:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 82m 5766Mi am-55f77847b7-79tz5 77m 5788Mi am-55f77847b7-c4982 79m 5719Mi ds-cts-0 14m 382Mi ds-cts-1 7m 367Mi ds-cts-2 7m 374Mi ds-idrepo-0 8238m 13848Mi ds-idrepo-1 1844m 13844Mi ds-idrepo-2 1860m 13826Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7035m 4700Mi idm-65858d8c4c-wd2fd 7019m 4550Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 973m 556Mi 23:14:32 DEBUG --- stderr --- 23:14:32 DEBUG 23:14:32 INFO 23:14:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:14:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:14:32 INFO [loop_until]: OK (rc = 0) 23:14:32 DEBUG --- stdout --- 23:14:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7282m 45% 6019Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1837m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7452m 46% 5803Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7934m 49% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2022m 12% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1888m 11% 14375Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1054m 6% 2076Mi 3% 23:14:32 DEBUG --- stderr --- 23:14:32 DEBUG 23:15:32 INFO 23:15:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:15:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:15:32 INFO [loop_until]: OK (rc = 0) 23:15:32 DEBUG --- stdout --- 23:15:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 86m 5766Mi am-55f77847b7-79tz5 81m 5788Mi am-55f77847b7-c4982 80m 5719Mi ds-cts-0 7m 383Mi ds-cts-1 7m 366Mi ds-cts-2 6m 373Mi ds-idrepo-0 7895m 13825Mi ds-idrepo-1 2041m 13837Mi ds-idrepo-2 1960m 13837Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7155m 4725Mi idm-65858d8c4c-wd2fd 6929m 4571Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 987m 556Mi 23:15:32 DEBUG --- stderr --- 23:15:32 DEBUG 23:15:32 INFO 23:15:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:15:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:15:32 INFO [loop_until]: OK (rc = 0) 23:15:32 DEBUG --- stdout --- 23:15:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 140m 0% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7356m 46% 6043Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1908m 12% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7197m 45% 5826Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8193m 51% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1802m 11% 14373Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1956m 12% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1080m 6% 2077Mi 3% 23:15:32 DEBUG --- stderr --- 23:15:32 DEBUG 23:16:32 INFO 23:16:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:16:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:16:32 INFO [loop_until]: OK (rc = 0) 23:16:32 DEBUG --- stdout --- 23:16:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 84m 5767Mi am-55f77847b7-79tz5 79m 5788Mi am-55f77847b7-c4982 83m 5719Mi ds-cts-0 7m 382Mi ds-cts-1 6m 366Mi ds-cts-2 10m 373Mi ds-idrepo-0 7994m 13830Mi ds-idrepo-1 1904m 13853Mi ds-idrepo-2 1802m 13843Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7012m 4749Mi idm-65858d8c4c-wd2fd 7211m 4599Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 997m 558Mi 23:16:32 DEBUG --- stderr --- 23:16:32 DEBUG 23:16:32 INFO 23:16:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:16:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:16:32 INFO [loop_until]: OK (rc = 0) 23:16:32 DEBUG --- stdout --- 23:16:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 142m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7298m 45% 6065Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1899m 11% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7506m 47% 5852Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7970m 50% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1782m 11% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2000m 12% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1065m 6% 2079Mi 3% 23:16:32 DEBUG --- stderr --- 23:16:32 DEBUG 23:17:32 INFO 23:17:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:17:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:17:32 INFO [loop_until]: OK (rc = 0) 23:17:32 DEBUG --- stdout --- 23:17:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 84m 5767Mi am-55f77847b7-79tz5 80m 5788Mi am-55f77847b7-c4982 86m 5719Mi ds-cts-0 5m 382Mi ds-cts-1 6m 366Mi ds-cts-2 8m 373Mi ds-idrepo-0 7826m 13826Mi ds-idrepo-1 1928m 13854Mi ds-idrepo-2 1706m 13849Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7060m 4770Mi idm-65858d8c4c-wd2fd 6876m 4622Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1003m 558Mi 23:17:32 DEBUG --- stderr --- 23:17:32 DEBUG 23:17:32 INFO 23:17:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:17:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:17:32 INFO [loop_until]: OK (rc = 0) 23:17:32 DEBUG --- stdout --- 23:17:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 147m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6947m 43% 6087Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1887m 11% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7294m 45% 5877Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 
51m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7460m 46% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1982m 12% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1814m 11% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1074m 6% 2078Mi 3% 23:17:32 DEBUG --- stderr --- 23:17:32 DEBUG 23:18:32 INFO 23:18:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:18:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:18:32 INFO [loop_until]: OK (rc = 0) 23:18:32 DEBUG --- stdout --- 23:18:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 82m 5767Mi am-55f77847b7-79tz5 78m 5790Mi am-55f77847b7-c4982 80m 5719Mi ds-cts-0 5m 383Mi ds-cts-1 6m 367Mi ds-cts-2 6m 373Mi ds-idrepo-0 7715m 13858Mi ds-idrepo-1 1985m 13840Mi ds-idrepo-2 2044m 13846Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7041m 4797Mi idm-65858d8c4c-wd2fd 7071m 4641Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1002m 558Mi 23:18:32 DEBUG --- stderr --- 23:18:32 DEBUG 23:18:32 INFO 23:18:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:18:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:18:32 INFO [loop_until]: OK (rc = 0) 23:18:32 DEBUG --- stdout --- 23:18:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7277m 45% 6113Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1893m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7370m 46% 5902Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8035m 50% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2105m 13% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1952m 12% 14375Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1078m 6% 2075Mi 3% 23:18:32 DEBUG --- stderr --- 23:18:32 DEBUG 23:19:32 INFO 23:19:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:19:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:19:32 INFO [loop_until]: OK (rc = 0) 23:19:32 DEBUG --- stdout --- 23:19:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 86m 5767Mi am-55f77847b7-79tz5 81m 5790Mi am-55f77847b7-c4982 86m 5719Mi ds-cts-0 6m 382Mi ds-cts-1 7m 367Mi ds-cts-2 6m 374Mi ds-idrepo-0 7888m 13856Mi ds-idrepo-1 2459m 13851Mi ds-idrepo-2 1713m 13856Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6755m 4824Mi idm-65858d8c4c-wd2fd 7168m 4674Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 974m 558Mi 23:19:32 DEBUG --- stderr --- 23:19:32 DEBUG 23:19:32 INFO 23:19:32 INFO [loop_until]: kubectl --namespace=xlou top node 23:19:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:19:33 INFO [loop_until]: OK (rc = 0) 23:19:33 DEBUG --- stdout --- 23:19:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7168m 
45% 6135Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1888m 11% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7432m 46% 5931Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8005m 50% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1754m 11% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1991m 12% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1072m 6% 2078Mi 3% 23:19:33 DEBUG --- stderr --- 23:19:33 DEBUG 23:20:32 INFO 23:20:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:20:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:20:32 INFO [loop_until]: OK (rc = 0) 23:20:32 DEBUG --- stdout --- 23:20:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 85m 5767Mi am-55f77847b7-79tz5 81m 5791Mi am-55f77847b7-c4982 83m 5720Mi ds-cts-0 5m 382Mi ds-cts-1 6m 367Mi ds-cts-2 6m 373Mi ds-idrepo-0 7693m 13860Mi ds-idrepo-1 1740m 13856Mi ds-idrepo-2 1849m 13857Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 6878m 4847Mi idm-65858d8c4c-wd2fd 7271m 4694Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 983m 559Mi 23:20:32 DEBUG --- stderr --- 23:20:32 DEBUG 23:20:33 INFO 23:20:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:20:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:20:33 INFO [loop_until]: OK (rc = 0) 23:20:33 DEBUG --- stdout --- 23:20:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7137m 44% 6164Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1888m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7448m 46% 5955Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7817m 49% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1825m 11% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1964m 12% 14387Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1062m 6% 2079Mi 3% 23:20:33 DEBUG --- stderr --- 23:20:33 DEBUG 23:21:32 INFO 23:21:32 INFO [loop_until]: kubectl --namespace=xlou top pods 23:21:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:21:32 INFO [loop_until]: OK (rc = 0) 23:21:32 DEBUG --- stdout --- 23:21:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 46m 5770Mi am-55f77847b7-79tz5 52m 5791Mi am-55f77847b7-c4982 42m 5720Mi ds-cts-0 5m 383Mi ds-cts-1 7m 367Mi ds-cts-2 7m 374Mi ds-idrepo-0 3675m 13859Mi ds-idrepo-1 1056m 13856Mi ds-idrepo-2 880m 13858Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2126m 4861Mi idm-65858d8c4c-wd2fd 3542m 4713Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 420m 558Mi 23:21:32 DEBUG --- stderr --- 23:21:32 DEBUG 23:21:33 INFO 23:21:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:21:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:21:33 INFO [loop_until]: OK (rc = 0) 23:21:33 DEBUG --- stdout --- 23:21:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1389Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6807Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3869m 24% 6183Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 871m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2818m 17% 5969Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2068m 13% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 637m 4% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1190m 7% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 583m 3% 2078Mi 3% 23:21:33 DEBUG --- stderr --- 23:21:33 DEBUG 23:22:33 INFO 23:22:33 INFO [loop_until]: kubectl --namespace=xlou top pods 23:22:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:22:33 INFO [loop_until]: OK (rc = 0) 23:22:33 DEBUG --- stdout --- 23:22:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 8m 5770Mi am-55f77847b7-79tz5 8m 5791Mi am-55f77847b7-c4982 8m 5720Mi ds-cts-0 6m 382Mi ds-cts-1 5m 368Mi ds-cts-2 11m 374Mi ds-idrepo-0 14m 13859Mi ds-idrepo-1 9m 13856Mi ds-idrepo-2 10m 13858Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 9m 4861Mi idm-65858d8c4c-wd2fd 8m 4712Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 109Mi 23:22:33 DEBUG --- stderr --- 23:22:33 DEBUG 23:22:33 INFO 23:22:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:22:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:22:33 INFO [loop_until]: OK (rc = 0) 23:22:33 DEBUG --- stdout --- 23:22:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 6181Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 5970Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1634Mi 2% 23:22:33 DEBUG --- stderr --- 23:22:33 DEBUG 127.0.0.1 - - [12/Aug/2023 23:23:16] "GET /monitoring/average?start_time=23-08-12_21:52:45&stop_time=23-08-12_22:21:15 HTTP/1.1" 200 - 23:23:33 INFO 23:23:33 INFO [loop_until]: kubectl --namespace=xlou top pods 23:23:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:23:33 INFO [loop_until]: OK (rc = 0) 23:23:33 DEBUG --- stdout --- 23:23:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 8m 5770Mi am-55f77847b7-79tz5 8m 5791Mi am-55f77847b7-c4982 9m 5720Mi ds-cts-0 6m 382Mi ds-cts-1 5m 367Mi ds-cts-2 8m 374Mi ds-idrepo-0 12m 13859Mi ds-idrepo-1 9m 13855Mi ds-idrepo-2 9m 13858Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 9m 4861Mi idm-65858d8c4c-wd2fd 6m 4712Mi lodemon-755c6d9977-9wwrg 1m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 2m 109Mi 23:23:33 DEBUG --- stderr --- 23:23:33 DEBUG 23:23:33 INFO 23:23:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:23:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:23:33 
INFO [loop_until]: OK (rc = 0) 23:23:33 DEBUG --- stdout --- 23:23:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 6186Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5972Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 358m 2% 1898Mi 3% 23:23:33 DEBUG --- stderr --- 23:23:33 DEBUG 23:24:33 INFO 23:24:33 INFO [loop_until]: kubectl --namespace=xlou top pods 23:24:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:24:33 INFO [loop_until]: OK (rc = 0) 23:24:33 DEBUG --- stdout --- 23:24:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 82m 5770Mi am-55f77847b7-79tz5 77m 5791Mi am-55f77847b7-c4982 82m 5720Mi ds-cts-0 8m 383Mi ds-cts-1 6m 367Mi ds-cts-2 6m 374Mi ds-idrepo-0 8452m 13837Mi ds-idrepo-1 2084m 13849Mi ds-idrepo-2 2658m 13832Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4932m 4901Mi idm-65858d8c4c-wd2fd 6519m 4755Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1283m 535Mi 23:24:33 DEBUG --- stderr --- 23:24:33 DEBUG 23:24:33 INFO 23:24:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:24:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:24:33 INFO [loop_until]: OK (rc = 0) 23:24:33 DEBUG --- stdout --- 23:24:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6396m 40% 6221Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1954m 12% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7273m 45% 6015Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8762m 55% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2671m 16% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2260m 14% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1390m 8% 2050Mi 3% 23:24:33 DEBUG --- stderr --- 23:24:33 DEBUG 23:25:33 INFO 23:25:33 INFO [loop_until]: kubectl --namespace=xlou top pods 23:25:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:25:33 INFO [loop_until]: OK (rc = 0) 23:25:33 DEBUG --- stdout --- 23:25:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 89m 5771Mi am-55f77847b7-79tz5 80m 5791Mi am-55f77847b7-c4982 85m 5721Mi ds-cts-0 6m 382Mi ds-cts-1 12m 368Mi ds-cts-2 7m 374Mi ds-idrepo-0 9541m 13823Mi ds-idrepo-1 3581m 13833Mi ds-idrepo-2 3224m 13797Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7406m 4936Mi idm-65858d8c4c-wd2fd 7688m 4791Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1135m 540Mi 23:25:33 DEBUG --- stderr --- 23:25:33 DEBUG 23:25:33 
INFO 23:25:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:25:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:25:33 INFO [loop_until]: OK (rc = 0) 23:25:33 DEBUG --- stdout --- 23:25:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7738m 48% 6254Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2081m 13% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7967m 50% 6044Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9831m 61% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3205m 20% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3825m 24% 14362Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1176m 7% 2058Mi 3% 23:25:33 DEBUG --- stderr --- 23:25:33 DEBUG 23:26:33 INFO 23:26:33 INFO [loop_until]: kubectl --namespace=xlou top pods 23:26:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:26:33 INFO [loop_until]: OK (rc = 0) 23:26:33 DEBUG --- stdout --- 23:26:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 90m 5771Mi am-55f77847b7-79tz5 85m 5791Mi am-55f77847b7-c4982 86m 5721Mi ds-cts-0 6m 382Mi ds-cts-1 10m 367Mi ds-cts-2 7m 374Mi ds-idrepo-0 8953m 13818Mi ds-idrepo-1 2251m 13860Mi ds-idrepo-2 2275m 13807Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7423m 4966Mi idm-65858d8c4c-wd2fd 7581m 4820Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1109m 544Mi 23:26:33 DEBUG --- stderr --- 23:26:33 DEBUG 23:26:33 INFO 23:26:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:26:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:26:33 INFO [loop_until]: OK (rc = 0) 23:26:33 DEBUG --- stdout --- 23:26:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7712m 48% 6287Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2153m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7570m 47% 6079Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8584m 54% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2902m 18% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2600m 16% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1193m 7% 2063Mi 3% 23:26:33 DEBUG --- stderr --- 23:26:33 DEBUG 23:27:33 INFO 23:27:33 INFO [loop_until]: kubectl --namespace=xlou top pods 23:27:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:27:33 INFO [loop_until]: OK (rc = 0) 23:27:33 DEBUG --- stdout --- 23:27:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5771Mi am-55f77847b7-79tz5 88m 5791Mi am-55f77847b7-c4982 90m 5721Mi ds-cts-0 7m 383Mi ds-cts-1 6m 367Mi ds-cts-2 9m 373Mi ds-idrepo-0 8488m 13611Mi ds-idrepo-1 2566m 13633Mi ds-idrepo-2 2187m 13628Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7175m 4996Mi idm-65858d8c4c-wd2fd 
7420m 4854Mi lodemon-755c6d9977-9wwrg 2m 65Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1149m 548Mi 23:27:33 DEBUG --- stderr --- 23:27:33 DEBUG 23:27:33 INFO 23:27:33 INFO [loop_until]: kubectl --namespace=xlou top node 23:27:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:27:34 INFO [loop_until]: OK (rc = 0) 23:27:34 DEBUG --- stdout --- 23:27:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7330m 46% 6323Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2080m 13% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7933m 49% 6110Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8556m 53% 14176Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2191m 13% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2456m 15% 14166Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1186m 7% 2068Mi 3% 23:27:34 DEBUG --- stderr --- 23:27:34 DEBUG 23:28:33 INFO 23:28:33 INFO [loop_until]: kubectl --namespace=xlou top pods 23:28:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:28:33 INFO [loop_until]: OK (rc = 0) 23:28:33 DEBUG --- stdout --- 23:28:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 89m 5772Mi am-55f77847b7-79tz5 84m 5791Mi am-55f77847b7-c4982 89m 5721Mi ds-cts-0 6m 383Mi ds-cts-1 7m 368Mi ds-cts-2 8m 372Mi ds-idrepo-0 8778m 13792Mi ds-idrepo-1 2260m 13740Mi ds-idrepo-2 2510m 13784Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7408m 5025Mi idm-65858d8c4c-wd2fd 7458m 4881Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1079m 554Mi 23:28:33 DEBUG --- stderr --- 23:28:33 DEBUG 23:28:34 INFO 23:28:34 INFO [loop_until]: kubectl --namespace=xlou top node 23:28:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:28:34 INFO [loop_until]: OK (rc = 0) 23:28:34 DEBUG --- stdout --- 23:28:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7625m 47% 6345Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2147m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7847m 49% 6138Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8748m 55% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2366m 14% 14312Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2386m 15% 14290Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1157m 7% 2073Mi 3% 23:28:34 DEBUG --- stderr --- 23:28:34 DEBUG 23:29:34 INFO 23:29:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:29:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:29:34 INFO [loop_until]: OK (rc = 0) 23:29:34 DEBUG --- stdout --- 23:29:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 91m 5772Mi am-55f77847b7-79tz5 87m 5791Mi am-55f77847b7-c4982 86m 5721Mi ds-cts-0 6m 384Mi ds-cts-1 6m 367Mi ds-cts-2 8m 372Mi 
ds-idrepo-0 8903m 13773Mi ds-idrepo-1 2575m 13730Mi ds-idrepo-2 2200m 13831Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7540m 5055Mi idm-65858d8c4c-wd2fd 7889m 4908Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1125m 559Mi 23:29:34 DEBUG --- stderr --- 23:29:34 DEBUG 23:29:34 INFO 23:29:34 INFO [loop_until]: kubectl --namespace=xlou top node 23:29:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:29:34 INFO [loop_until]: OK (rc = 0) 23:29:34 DEBUG --- stdout --- 23:29:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7719m 48% 6376Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2103m 13% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7983m 50% 6170Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8901m 56% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2423m 15% 14266Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2419m 15% 14273Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1191m 7% 2079Mi 3% 23:29:34 DEBUG --- stderr --- 23:29:34 DEBUG 23:30:34 INFO 23:30:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:30:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:30:34 INFO [loop_until]: OK (rc = 0) 23:30:34 DEBUG --- stdout --- 23:30:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 94m 5772Mi am-55f77847b7-79tz5 89m 5791Mi am-55f77847b7-c4982 91m 5721Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 8m 372Mi ds-idrepo-0 8816m 13852Mi ds-idrepo-1 2493m 13835Mi ds-idrepo-2 2130m 13829Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7365m 5086Mi idm-65858d8c4c-wd2fd 7414m 4942Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1116m 563Mi 23:30:34 DEBUG --- stderr --- 23:30:34 DEBUG 23:30:34 INFO 23:30:34 INFO [loop_until]: kubectl --namespace=xlou top node 23:30:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:30:34 INFO [loop_until]: OK (rc = 0) 23:30:34 DEBUG --- stdout --- 23:30:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7744m 48% 6407Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2191m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8022m 50% 6202Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8687m 54% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2477m 15% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2595m 16% 14301Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1234m 7% 2085Mi 3% 23:30:34 DEBUG --- stderr --- 23:30:34 DEBUG 23:31:34 INFO 23:31:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:31:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:31:34 INFO [loop_until]: OK (rc = 0) 23:31:34 DEBUG --- stdout --- 23:31:34 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 91m 5772Mi am-55f77847b7-79tz5 84m 5791Mi am-55f77847b7-c4982 88m 5721Mi ds-cts-0 6m 384Mi ds-cts-1 7m 367Mi ds-cts-2 6m 372Mi ds-idrepo-0 8576m 13837Mi ds-idrepo-1 2411m 13836Mi ds-idrepo-2 2264m 13843Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7430m 5116Mi idm-65858d8c4c-wd2fd 7438m 4973Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1085m 567Mi 23:31:34 DEBUG --- stderr --- 23:31:34 DEBUG 23:31:34 INFO 23:31:34 INFO [loop_until]: kubectl --namespace=xlou top node 23:31:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:31:34 INFO [loop_until]: OK (rc = 0) 23:31:34 DEBUG --- stdout --- 23:31:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7646m 48% 6436Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2082m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7905m 49% 6232Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9185m 57% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2346m 14% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2579m 16% 14404Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1164m 7% 2088Mi 3% 23:31:34 DEBUG --- stderr --- 23:31:34 DEBUG 23:32:34 INFO 23:32:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:32:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:32:34 INFO [loop_until]: OK (rc = 0) 23:32:34 DEBUG --- stdout --- 23:32:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 92m 5772Mi am-55f77847b7-79tz5 86m 5792Mi am-55f77847b7-c4982 88m 5722Mi ds-cts-0 6m 384Mi ds-cts-1 7m 367Mi ds-cts-2 7m 372Mi ds-idrepo-0 8928m 13837Mi ds-idrepo-1 2605m 13822Mi ds-idrepo-2 2294m 13836Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7440m 5146Mi idm-65858d8c4c-wd2fd 7536m 5004Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1089m 573Mi 23:32:34 DEBUG --- stderr --- 23:32:34 DEBUG 23:32:34 INFO 23:32:34 INFO [loop_until]: kubectl --namespace=xlou top node 23:32:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:32:34 INFO [loop_until]: OK (rc = 0) 23:32:34 DEBUG --- stdout --- 23:32:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7705m 48% 6463Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2174m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7973m 50% 6263Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8784m 55% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2549m 16% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2323m 14% 14404Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1176m 7% 2090Mi 3% 23:32:34 DEBUG --- stderr --- 23:32:34 DEBUG 23:33:34 INFO 23:33:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:33:34 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:33:34 INFO [loop_until]: OK (rc = 0) 23:33:34 DEBUG --- stdout --- 23:33:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5772Mi am-55f77847b7-79tz5 93m 5792Mi am-55f77847b7-c4982 92m 5722Mi ds-cts-0 5m 384Mi ds-cts-1 7m 369Mi ds-cts-2 7m 374Mi ds-idrepo-0 8686m 13726Mi ds-idrepo-1 2425m 13762Mi ds-idrepo-2 2660m 13762Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7254m 5176Mi idm-65858d8c4c-wd2fd 7508m 5033Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1091m 578Mi 23:33:34 DEBUG --- stderr --- 23:33:34 DEBUG 23:33:34 INFO 23:33:34 INFO [loop_until]: kubectl --namespace=xlou top node 23:33:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:33:34 INFO [loop_until]: OK (rc = 0) 23:33:34 DEBUG --- stdout --- 23:33:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7696m 48% 6492Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2146m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7870m 49% 6291Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8648m 54% 14310Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2320m 14% 14324Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2333m 14% 14337Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1166m 7% 2095Mi 3% 23:33:34 DEBUG --- stderr --- 23:33:34 DEBUG 23:34:34 INFO 23:34:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:34:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:34:34 INFO [loop_until]: OK (rc = 0) 23:34:34 DEBUG --- stdout --- 23:34:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 94m 5772Mi am-55f77847b7-79tz5 86m 5793Mi am-55f77847b7-c4982 87m 5722Mi ds-cts-0 7m 384Mi ds-cts-1 8m 369Mi ds-cts-2 9m 373Mi ds-idrepo-0 9137m 13846Mi ds-idrepo-1 2967m 13823Mi ds-idrepo-2 2565m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7275m 5208Mi idm-65858d8c4c-wd2fd 7628m 5066Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1078m 582Mi 23:34:34 DEBUG --- stderr --- 23:34:34 DEBUG 23:34:34 INFO 23:34:34 INFO [loop_until]: kubectl --namespace=xlou top node 23:34:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:34:34 INFO [loop_until]: OK (rc = 0) 23:34:34 DEBUG --- stdout --- 23:34:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7486m 47% 6520Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2144m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7722m 48% 6324Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9394m 59% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3041m 19% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3154m 19% 14400Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 1169m 7% 2097Mi 3% 23:34:34 DEBUG --- stderr --- 23:34:34 DEBUG 23:35:34 INFO 23:35:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:35:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:35:34 INFO [loop_until]: OK (rc = 0) 23:35:34 DEBUG --- stdout --- 23:35:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 94m 5772Mi am-55f77847b7-79tz5 91m 5792Mi am-55f77847b7-c4982 91m 5722Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 8m 373Mi ds-idrepo-0 8269m 13855Mi ds-idrepo-1 2068m 13880Mi ds-idrepo-2 2115m 13858Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7250m 5236Mi idm-65858d8c4c-wd2fd 7543m 5089Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1123m 587Mi 23:35:34 DEBUG --- stderr --- 23:35:34 DEBUG 23:35:35 INFO 23:35:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:35:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:35:35 INFO [loop_until]: OK (rc = 0) 23:35:35 DEBUG --- stdout --- 23:35:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7840m 49% 6551Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2171m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7870m 49% 6354Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8585m 54% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2220m 13% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2362m 14% 14442Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1201m 7% 2104Mi 3% 23:35:35 DEBUG --- stderr --- 23:35:35 DEBUG 23:36:34 INFO 23:36:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:36:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:36:34 INFO [loop_until]: OK (rc = 0) 23:36:34 DEBUG --- stdout --- 23:36:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 90m 5772Mi am-55f77847b7-79tz5 88m 5792Mi am-55f77847b7-c4982 89m 5722Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 11m 374Mi ds-idrepo-0 8433m 13855Mi ds-idrepo-1 2035m 13875Mi ds-idrepo-2 2140m 13861Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7402m 5264Mi idm-65858d8c4c-wd2fd 7798m 5121Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1104m 591Mi 23:36:34 DEBUG --- stderr --- 23:36:34 DEBUG 23:36:35 INFO 23:36:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:36:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:36:35 INFO [loop_until]: OK (rc = 0) 23:36:35 DEBUG --- stdout --- 23:36:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7673m 48% 6580Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2182m 13% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8011m 50% 6378Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 
8531m 53% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2335m 14% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2361m 14% 14443Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1206m 7% 2106Mi 3% 23:36:35 DEBUG --- stderr --- 23:36:35 DEBUG 23:37:34 INFO 23:37:34 INFO [loop_until]: kubectl --namespace=xlou top pods 23:37:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:37:34 INFO [loop_until]: OK (rc = 0) 23:37:34 DEBUG --- stdout --- 23:37:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 91m 5772Mi am-55f77847b7-79tz5 87m 5792Mi am-55f77847b7-c4982 88m 5723Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 8m 373Mi ds-idrepo-0 8516m 13850Mi ds-idrepo-1 2512m 13836Mi ds-idrepo-2 2133m 13840Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7214m 5295Mi idm-65858d8c4c-wd2fd 7381m 5152Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1104m 595Mi 23:37:34 DEBUG --- stderr --- 23:37:34 DEBUG 23:37:35 INFO 23:37:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:37:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:37:35 INFO [loop_until]: OK (rc = 0) 23:37:35 DEBUG --- stdout --- 23:37:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7699m 48% 6608Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2150m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7545m 47% 6411Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8475m 53% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2108m 13% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2496m 15% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1189m 7% 2110Mi 3% 23:37:35 DEBUG --- stderr --- 23:37:35 DEBUG 23:38:35 INFO 23:38:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:38:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:38:35 INFO [loop_until]: OK (rc = 0) 23:38:35 DEBUG --- stdout --- 23:38:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 97m 5772Mi am-55f77847b7-79tz5 88m 5792Mi am-55f77847b7-c4982 90m 5723Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 8m 374Mi ds-idrepo-0 8523m 13829Mi ds-idrepo-1 2260m 13856Mi ds-idrepo-2 1773m 13862Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7085m 5322Mi idm-65858d8c4c-wd2fd 7630m 5186Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1104m 600Mi 23:38:35 DEBUG --- stderr --- 23:38:35 DEBUG 23:38:35 INFO 23:38:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:38:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:38:35 INFO [loop_until]: OK (rc = 0) 23:38:35 DEBUG --- stdout --- 23:38:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 166m 1% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7736m 48% 6638Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2152m 13% 2163Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 7835m 49% 6440Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8573m 53% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1893m 11% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2124m 13% 14438Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1176m 7% 2114Mi 3% 23:38:35 DEBUG --- stderr --- 23:38:35 DEBUG 23:39:35 INFO 23:39:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:39:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:39:35 INFO [loop_until]: OK (rc = 0) 23:39:35 DEBUG --- stdout --- 23:39:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 94m 5772Mi am-55f77847b7-79tz5 86m 5790Mi am-55f77847b7-c4982 91m 5720Mi ds-cts-0 5m 384Mi ds-cts-1 7m 368Mi ds-cts-2 8m 374Mi ds-idrepo-0 8111m 13858Mi ds-idrepo-1 2334m 13664Mi ds-idrepo-2 2122m 13656Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7431m 5353Mi idm-65858d8c4c-wd2fd 7570m 5215Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1104m 605Mi 23:39:35 DEBUG --- stderr --- 23:39:35 DEBUG 23:39:35 INFO 23:39:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:39:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:39:35 INFO [loop_until]: OK (rc = 0) 23:39:35 DEBUG --- stdout --- 23:39:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7646m 48% 6668Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2172m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7942m 49% 6469Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7986m 50% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1910m 12% 14229Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2304m 14% 14243Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1186m 7% 2121Mi 3% 23:39:35 DEBUG --- stderr --- 23:39:35 DEBUG 23:40:35 INFO 23:40:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:40:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:40:35 INFO [loop_until]: OK (rc = 0) 23:40:35 DEBUG --- stdout --- 23:40:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 92m 5772Mi am-55f77847b7-79tz5 93m 5790Mi am-55f77847b7-c4982 87m 5720Mi ds-cts-0 9m 384Mi ds-cts-1 7m 369Mi ds-cts-2 8m 374Mi ds-idrepo-0 8173m 13546Mi ds-idrepo-1 2295m 13480Mi ds-idrepo-2 1909m 13593Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7589m 5382Mi idm-65858d8c4c-wd2fd 7534m 5244Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1111m 609Mi 23:40:35 DEBUG --- stderr --- 23:40:35 DEBUG 23:40:35 INFO 23:40:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:40:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:40:35 INFO [loop_until]: OK (rc = 0) 23:40:35 DEBUG --- stdout --- 23:40:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6857Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7814m 49% 6696Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2097m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7793m 49% 6497Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8203m 51% 14134Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1907m 12% 14166Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2098m 13% 14076Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1170m 7% 2126Mi 3% 23:40:35 DEBUG --- stderr --- 23:40:35 DEBUG 23:41:35 INFO 23:41:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:41:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:41:35 INFO [loop_until]: OK (rc = 0) 23:41:35 DEBUG --- stdout --- 23:41:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 96m 5772Mi am-55f77847b7-79tz5 86m 5790Mi am-55f77847b7-c4982 89m 5720Mi ds-cts-0 6m 384Mi ds-cts-1 6m 368Mi ds-cts-2 8m 374Mi ds-idrepo-0 8589m 13758Mi ds-idrepo-1 2340m 13584Mi ds-idrepo-2 2111m 13697Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7209m 5412Mi idm-65858d8c4c-wd2fd 7298m 5276Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1108m 614Mi 23:41:35 DEBUG --- stderr --- 23:41:35 DEBUG 23:41:35 INFO 23:41:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:41:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:41:35 INFO [loop_until]: OK (rc = 0) 23:41:35 DEBUG --- stdout --- 23:41:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7335m 46% 6732Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2148m 13% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7593m 47% 6533Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8454m 53% 14332Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2070m 13% 14277Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2352m 14% 14168Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1169m 7% 2131Mi 3% 23:41:35 DEBUG --- stderr --- 23:41:35 DEBUG 23:42:35 INFO 23:42:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:42:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:42:35 INFO [loop_until]: OK (rc = 0) 23:42:35 DEBUG --- stdout --- 23:42:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 91m 5773Mi am-55f77847b7-79tz5 85m 5791Mi am-55f77847b7-c4982 88m 5720Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 7m 374Mi ds-idrepo-0 8849m 13728Mi ds-idrepo-1 2514m 13697Mi ds-idrepo-2 2378m 13787Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7495m 5448Mi idm-65858d8c4c-wd2fd 7627m 5310Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1121m 618Mi 23:42:35 DEBUG --- stderr --- 23:42:35 DEBUG 23:42:35 INFO 23:42:35 INFO [loop_until]: kubectl --namespace=xlou top node 23:42:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:42:36 INFO [loop_until]: OK (rc = 0) 23:42:36 DEBUG --- stdout --- 23:42:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7940m 49% 6765Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2134m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7888m 49% 6581Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8987m 56% 14336Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2626m 16% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2688m 16% 14287Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1148m 7% 2135Mi 3% 23:42:36 DEBUG --- stderr --- 23:42:36 DEBUG 23:43:35 INFO 23:43:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:43:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:43:35 INFO [loop_until]: OK (rc = 0) 23:43:35 DEBUG --- stdout --- 23:43:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 92m 5773Mi am-55f77847b7-79tz5 86m 5791Mi am-55f77847b7-c4982 91m 5720Mi ds-cts-0 6m 385Mi ds-cts-1 12m 368Mi ds-cts-2 7m 374Mi ds-idrepo-0 8432m 13825Mi ds-idrepo-1 2306m 13779Mi ds-idrepo-2 2407m 13834Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7190m 5475Mi idm-65858d8c4c-wd2fd 7499m 5349Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1077m 624Mi 23:43:35 DEBUG --- stderr --- 23:43:35 DEBUG 23:43:36 INFO 23:43:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:43:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:43:36 INFO [loop_until]: OK (rc = 0) 23:43:36 DEBUG --- stdout --- 23:43:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7592m 47% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2166m 13% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7978m 50% 6601Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8862m 55% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2203m 13% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2338m 14% 14371Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1173m 7% 2140Mi 3% 23:43:36 DEBUG --- stderr --- 23:43:36 DEBUG 23:44:35 INFO 23:44:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:44:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:44:35 INFO [loop_until]: OK (rc = 0) 23:44:35 DEBUG --- stdout --- 23:44:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5773Mi am-55f77847b7-79tz5 85m 5791Mi am-55f77847b7-c4982 91m 5720Mi ds-cts-0 6m 384Mi ds-cts-1 9m 368Mi ds-cts-2 7m 374Mi ds-idrepo-0 8439m 13823Mi ds-idrepo-1 2109m 13823Mi ds-idrepo-2 2097m 13825Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7113m 5508Mi idm-65858d8c4c-wd2fd 7527m 5379Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1093m 628Mi 23:44:35 DEBUG --- stderr --- 23:44:35 DEBUG 23:44:36 INFO 23:44:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:44:36 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 23:44:36 INFO [loop_until]: OK (rc = 0) 23:44:36 DEBUG --- stdout --- 23:44:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7400m 46% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2073m 13% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7805m 49% 6636Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8714m 54% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2479m 15% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2193m 13% 14410Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1184m 7% 2142Mi 3% 23:44:36 DEBUG --- stderr --- 23:44:36 DEBUG 23:45:35 INFO 23:45:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:45:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:45:35 INFO [loop_until]: OK (rc = 0) 23:45:35 DEBUG --- stdout --- 23:45:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 91m 5773Mi am-55f77847b7-79tz5 88m 5791Mi am-55f77847b7-c4982 88m 5721Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 8m 374Mi ds-idrepo-0 8007m 13817Mi ds-idrepo-1 2344m 13787Mi ds-idrepo-2 2240m 13789Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7354m 5539Mi idm-65858d8c4c-wd2fd 7457m 5398Mi lodemon-755c6d9977-9wwrg 1m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1101m 633Mi 23:45:35 DEBUG --- stderr --- 23:45:35 DEBUG 23:45:36 INFO 23:45:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:45:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:45:36 INFO [loop_until]: OK (rc = 0) 23:45:36 DEBUG --- stdout --- 23:45:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7669m 48% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2151m 13% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7801m 49% 6656Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8310m 52% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2444m 15% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2176m 13% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1150m 7% 2148Mi 3% 23:45:36 DEBUG --- stderr --- 23:45:36 DEBUG 23:46:35 INFO 23:46:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:46:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:46:35 INFO [loop_until]: OK (rc = 0) 23:46:35 DEBUG --- stdout --- 23:46:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 92m 5773Mi am-55f77847b7-79tz5 84m 5791Mi am-55f77847b7-c4982 91m 5721Mi ds-cts-0 6m 384Mi ds-cts-1 7m 369Mi ds-cts-2 9m 375Mi ds-idrepo-0 8086m 13854Mi ds-idrepo-1 2702m 13805Mi ds-idrepo-2 2352m 13822Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7324m 5558Mi idm-65858d8c4c-wd2fd 7414m 5398Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi 
overseer-0-c77c496cb-dtn6s 1137m 637Mi 23:46:35 DEBUG --- stderr --- 23:46:35 DEBUG 23:46:36 INFO 23:46:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:46:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:46:36 INFO [loop_until]: OK (rc = 0) 23:46:36 DEBUG --- stdout --- 23:46:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7646m 48% 6874Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2080m 13% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7815m 49% 6653Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8407m 52% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2814m 17% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2810m 17% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1189m 7% 2148Mi 3% 23:46:36 DEBUG --- stderr --- 23:46:36 DEBUG 23:47:35 INFO 23:47:35 INFO [loop_until]: kubectl --namespace=xlou top pods 23:47:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:47:36 INFO [loop_until]: OK (rc = 0) 23:47:36 DEBUG --- stdout --- 23:47:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5773Mi am-55f77847b7-79tz5 86m 5791Mi am-55f77847b7-c4982 92m 5721Mi ds-cts-0 6m 384Mi ds-cts-1 7m 368Mi ds-cts-2 8m 375Mi ds-idrepo-0 9682m 13744Mi ds-idrepo-1 2955m 13678Mi ds-idrepo-2 2415m 13646Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7234m 5556Mi idm-65858d8c4c-wd2fd 7432m 5398Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1107m 642Mi 23:47:36 DEBUG --- stderr --- 23:47:36 DEBUG 23:47:36 INFO 23:47:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:47:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:47:36 INFO [loop_until]: OK (rc = 0) 23:47:36 DEBUG --- stdout --- 23:47:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7664m 48% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2154m 13% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8005m 50% 6650Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9331m 58% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2175m 13% 14220Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2733m 17% 14282Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1201m 7% 2155Mi 3% 23:47:36 DEBUG --- stderr --- 23:47:36 DEBUG 23:48:36 INFO 23:48:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:48:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:48:36 INFO [loop_until]: OK (rc = 0) 23:48:36 DEBUG --- stdout --- 23:48:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5773Mi am-55f77847b7-79tz5 87m 5791Mi am-55f77847b7-c4982 93m 5721Mi ds-cts-0 11m 382Mi ds-cts-1 6m 368Mi ds-cts-2 7m 374Mi ds-idrepo-0 8388m 13744Mi ds-idrepo-1 2106m 13581Mi ds-idrepo-2 1977m 13583Mi 
end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7319m 5557Mi idm-65858d8c4c-wd2fd 7476m 5398Mi lodemon-755c6d9977-9wwrg 1m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1101m 646Mi 23:48:36 DEBUG --- stderr --- 23:48:36 DEBUG 23:48:36 INFO 23:48:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:48:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:48:36 INFO [loop_until]: OK (rc = 0) 23:48:36 DEBUG --- stdout --- 23:48:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7621m 47% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2091m 13% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7841m 49% 6656Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8123m 51% 14347Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2100m 13% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2021m 12% 14176Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1192m 7% 2156Mi 3% 23:48:36 DEBUG --- stderr --- 23:48:36 DEBUG 23:49:36 INFO 23:49:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:49:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:49:36 INFO [loop_until]: OK (rc = 0) 23:49:36 DEBUG --- stdout --- 23:49:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 96m 5773Mi am-55f77847b7-79tz5 87m 5791Mi am-55f77847b7-c4982 90m 5721Mi ds-cts-0 5m 382Mi ds-cts-1 18m 371Mi ds-cts-2 7m 375Mi ds-idrepo-0 8415m 13814Mi ds-idrepo-1 1973m 13666Mi ds-idrepo-2 1795m 13666Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7497m 5557Mi idm-65858d8c4c-wd2fd 7609m 5397Mi lodemon-755c6d9977-9wwrg 4m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1108m 650Mi 23:49:36 DEBUG --- stderr --- 23:49:36 DEBUG 23:49:36 INFO 23:49:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:49:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:49:36 INFO [loop_until]: OK (rc = 0) 23:49:36 DEBUG --- stdout --- 23:49:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7678m 48% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2142m 13% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7786m 48% 6656Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8326m 52% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1991m 12% 14246Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2022m 12% 14260Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1184m 7% 2164Mi 3% 23:49:36 DEBUG --- stderr --- 23:49:36 DEBUG 23:50:36 INFO 23:50:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:50:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:50:36 INFO [loop_until]: OK (rc = 0) 23:50:36 DEBUG --- stdout --- 23:50:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 96m 5773Mi am-55f77847b7-79tz5 91m 
5792Mi am-55f77847b7-c4982 92m 5721Mi ds-cts-0 7m 383Mi ds-cts-1 6m 370Mi ds-cts-2 11m 375Mi ds-idrepo-0 8492m 13813Mi ds-idrepo-1 1956m 13823Mi ds-idrepo-2 1995m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7447m 5557Mi idm-65858d8c4c-wd2fd 7557m 5397Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1134m 654Mi 23:50:36 DEBUG --- stderr --- 23:50:36 DEBUG 23:50:36 INFO 23:50:36 INFO [loop_until]: kubectl --namespace=xlou top node 23:50:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:50:37 INFO [loop_until]: OK (rc = 0) 23:50:37 DEBUG --- stdout --- 23:50:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7768m 48% 6874Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2121m 13% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7947m 50% 6656Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8621m 54% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2155m 13% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1864m 11% 14425Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1222m 7% 2166Mi 3% 23:50:37 DEBUG --- stderr --- 23:50:37 DEBUG 23:51:36 INFO 23:51:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:51:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:51:36 INFO [loop_until]: OK (rc = 0) 23:51:36 DEBUG --- stdout --- 23:51:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 94m 5773Mi am-55f77847b7-79tz5 84m 5791Mi am-55f77847b7-c4982 88m 5721Mi ds-cts-0 9m 384Mi ds-cts-1 7m 370Mi ds-cts-2 20m 373Mi ds-idrepo-0 8428m 13822Mi ds-idrepo-1 1860m 13812Mi ds-idrepo-2 2332m 13804Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7471m 5557Mi idm-65858d8c4c-wd2fd 7527m 5402Mi lodemon-755c6d9977-9wwrg 1m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1135m 653Mi 23:51:36 DEBUG --- stderr --- 23:51:36 DEBUG 23:51:37 INFO 23:51:37 INFO [loop_until]: kubectl --namespace=xlou top node 23:51:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:51:37 INFO [loop_until]: OK (rc = 0) 23:51:37 DEBUG --- stdout --- 23:51:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7281m 45% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2153m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7868m 49% 6657Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 71m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8389m 52% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2767m 17% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1834m 11% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1204m 7% 2167Mi 3% 23:51:37 DEBUG --- stderr --- 23:51:37 DEBUG 23:52:36 INFO 23:52:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:52:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:52:36 INFO [loop_until]: OK 
(rc = 0) 23:52:36 DEBUG --- stdout --- 23:52:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 95m 5773Mi am-55f77847b7-79tz5 95m 5791Mi am-55f77847b7-c4982 90m 5721Mi ds-cts-0 9m 384Mi ds-cts-1 7m 371Mi ds-cts-2 7m 373Mi ds-idrepo-0 9123m 13796Mi ds-idrepo-1 2706m 13812Mi ds-idrepo-2 1777m 13816Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7410m 5558Mi idm-65858d8c4c-wd2fd 7586m 5402Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1097m 655Mi 23:52:36 DEBUG --- stderr --- 23:52:36 DEBUG 23:52:37 INFO 23:52:37 INFO [loop_until]: kubectl --namespace=xlou top node 23:52:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:52:37 INFO [loop_until]: OK (rc = 0) 23:52:37 DEBUG --- stdout --- 23:52:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7892m 49% 6874Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2184m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7584m 47% 6657Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9000m 56% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2356m 14% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2608m 16% 14425Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1160m 7% 2167Mi 3% 23:52:37 DEBUG --- stderr --- 23:52:37 DEBUG 23:53:36 INFO 23:53:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:53:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:53:36 INFO [loop_until]: OK (rc = 0) 23:53:36 DEBUG --- stdout --- 23:53:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5773Mi am-55f77847b7-79tz5 88m 5791Mi am-55f77847b7-c4982 94m 5721Mi ds-cts-0 6m 384Mi ds-cts-1 6m 371Mi ds-cts-2 7m 373Mi ds-idrepo-0 8379m 13822Mi ds-idrepo-1 2344m 13805Mi ds-idrepo-2 1826m 13814Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7458m 5558Mi idm-65858d8c4c-wd2fd 7660m 5401Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1064m 655Mi 23:53:36 DEBUG --- stderr --- 23:53:36 DEBUG 23:53:37 INFO 23:53:37 INFO [loop_until]: kubectl --namespace=xlou top node 23:53:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:53:37 INFO [loop_until]: OK (rc = 0) 23:53:37 DEBUG --- stdout --- 23:53:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7626m 47% 6874Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2179m 13% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7892m 49% 6657Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8715m 54% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1747m 10% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2423m 15% 14418Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1150m 7% 2169Mi 3% 23:53:37 DEBUG --- stderr --- 23:53:37 DEBUG 23:54:36 
INFO 23:54:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:54:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:54:36 INFO [loop_until]: OK (rc = 0) 23:54:36 DEBUG --- stdout --- 23:54:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 6m 5773Mi am-55f77847b7-79tz5 6m 5791Mi am-55f77847b7-c4982 6m 5721Mi ds-cts-0 6m 385Mi ds-cts-1 5m 371Mi ds-cts-2 8m 374Mi ds-idrepo-0 58m 13813Mi ds-idrepo-1 100m 13759Mi ds-idrepo-2 529m 13736Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 5557Mi idm-65858d8c4c-wd2fd 8m 5400Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 133m 224Mi 23:54:36 DEBUG --- stderr --- 23:54:36 DEBUG 23:54:37 INFO 23:54:37 INFO [loop_until]: kubectl --namespace=xlou top node 23:54:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:54:37 INFO [loop_until]: OK (rc = 0) 23:54:37 DEBUG --- stdout --- 23:54:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 147m 0% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6659Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 458m 2% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 129m 0% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 167m 1% 1745Mi 2% 23:54:37 DEBUG --- stderr --- 23:54:37 DEBUG 23:55:36 INFO 23:55:36 INFO [loop_until]: kubectl --namespace=xlou top pods 23:55:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:55:37 INFO [loop_until]: OK (rc = 0) 23:55:37 DEBUG --- stdout --- 23:55:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 13m 5773Mi am-55f77847b7-79tz5 7m 5791Mi am-55f77847b7-c4982 6m 5721Mi ds-cts-0 6m 385Mi ds-cts-1 5m 371Mi ds-cts-2 7m 374Mi ds-idrepo-0 12m 13812Mi ds-idrepo-1 9m 13759Mi ds-idrepo-2 11m 13735Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 5557Mi idm-65858d8c4c-wd2fd 7m 5400Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 224Mi 23:55:37 DEBUG --- stderr --- 23:55:37 DEBUG 23:55:37 INFO 23:55:37 INFO [loop_until]: kubectl --namespace=xlou top node 23:55:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:55:37 INFO [loop_until]: OK (rc = 0) 23:55:37 DEBUG --- stdout --- 23:55:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 6658Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14379Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1748Mi 2% 23:55:37 DEBUG --- stderr --- 23:55:37 DEBUG 127.0.0.1 - - [12/Aug/2023 23:55:47] "GET /monitoring/average?start_time=23-08-12_22:25:16&stop_time=23-08-12_22:53:47 HTTP/1.1" 200 - 23:56:37 INFO 23:56:37 INFO [loop_until]: kubectl --namespace=xlou top pods 23:56:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:56:37 INFO [loop_until]: OK (rc = 0) 23:56:37 DEBUG --- stdout --- 23:56:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 27m 5773Mi am-55f77847b7-79tz5 17m 5792Mi am-55f77847b7-c4982 10m 5721Mi ds-cts-0 7m 385Mi ds-cts-1 7m 372Mi ds-cts-2 8m 374Mi ds-idrepo-0 380m 13844Mi ds-idrepo-1 8m 13760Mi ds-idrepo-2 76m 13749Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 8m 5556Mi idm-65858d8c4c-wd2fd 293m 5412Mi lodemon-755c6d9977-9wwrg 3m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1889m 630Mi 23:56:37 DEBUG --- stderr --- 23:56:37 DEBUG 23:56:37 INFO 23:56:37 INFO [loop_until]: kubectl --namespace=xlou top node 23:56:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:56:37 INFO [loop_until]: OK (rc = 0) 23:56:37 DEBUG --- stdout --- 23:56:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 161m 1% 6931Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 236m 1% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 140m 0% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 452m 2% 6673Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 930m 5% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 772m 4% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 839m 5% 14446Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2189m 13% 2139Mi 3% 23:56:37 DEBUG --- stderr --- 23:56:37 DEBUG 23:57:37 INFO 23:57:37 INFO [loop_until]: kubectl --namespace=xlou top pods 23:57:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:57:37 INFO [loop_until]: OK (rc = 0) 23:57:37 DEBUG --- stdout --- 23:57:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 74m 5774Mi am-55f77847b7-79tz5 67m 5792Mi am-55f77847b7-c4982 67m 5798Mi ds-cts-0 5m 384Mi ds-cts-1 7m 371Mi ds-cts-2 7m 375Mi ds-idrepo-0 4681m 13845Mi ds-idrepo-1 3238m 13825Mi ds-idrepo-2 3335m 13833Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 1856m 5590Mi idm-65858d8c4c-wd2fd 2097m 5440Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1518m 1237Mi 23:57:37 DEBUG --- stderr --- 23:57:37 DEBUG 23:57:37 INFO 23:57:37 INFO [loop_until]: kubectl --namespace=xlou top node 23:57:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:57:37 INFO [loop_until]: OK (rc = 0) 23:57:37 DEBUG --- stdout --- 23:57:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2100m 13% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1523m 9% 3011Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2263m 14% 6683Mi 11% 
gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4668m 29% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3350m 21% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3288m 20% 14435Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1628m 10% 2650Mi 4% 23:57:37 DEBUG --- stderr --- 23:57:37 DEBUG 23:58:37 INFO 23:58:37 INFO [loop_until]: kubectl --namespace=xlou top pods 23:58:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:58:37 INFO [loop_until]: OK (rc = 0) 23:58:37 DEBUG --- stdout --- 23:58:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 91m 5774Mi am-55f77847b7-79tz5 86m 5792Mi am-55f77847b7-c4982 94m 5798Mi ds-cts-0 6m 385Mi ds-cts-1 14m 371Mi ds-cts-2 9m 375Mi ds-idrepo-0 5895m 13823Mi ds-idrepo-1 3521m 13823Mi ds-idrepo-2 3643m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2754m 5580Mi idm-65858d8c4c-wd2fd 2544m 5435Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1284m 1297Mi 23:58:37 DEBUG --- stderr --- 23:58:37 DEBUG 23:58:38 INFO 23:58:38 INFO [loop_until]: kubectl --namespace=xlou top node 23:58:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:58:38 INFO [loop_until]: OK (rc = 0) 23:58:38 DEBUG --- stdout --- 23:58:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2997m 18% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1698m 10% 2756Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2753m 17% 6682Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5993m 37% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3923m 24% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3484m 21% 14446Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1319m 8% 2792Mi 4% 23:58:38 DEBUG --- stderr --- 23:58:38 DEBUG 23:59:37 INFO 23:59:37 INFO [loop_until]: kubectl --namespace=xlou top pods 23:59:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:59:37 INFO [loop_until]: OK (rc = 0) 23:59:37 DEBUG --- stdout --- 23:59:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 111m 5774Mi am-55f77847b7-79tz5 100m 5792Mi am-55f77847b7-c4982 113m 5798Mi ds-cts-0 6m 386Mi ds-cts-1 9m 371Mi ds-cts-2 11m 375Mi ds-idrepo-0 7375m 13822Mi ds-idrepo-1 4863m 13823Mi ds-idrepo-2 4967m 13839Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 3338m 5582Mi idm-65858d8c4c-wd2fd 3516m 5439Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 810m 1342Mi 23:59:37 DEBUG --- stderr --- 23:59:37 DEBUG 23:59:38 INFO 23:59:38 INFO [loop_until]: kubectl --namespace=xlou top node 23:59:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:59:38 INFO [loop_until]: OK (rc = 0) 23:59:38 DEBUG --- stdout --- 23:59:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 174m 1% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 172m 1% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 160m 1% 6958Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 3564m 22% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1642m 10% 2702Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3632m 22% 6687Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7209m 45% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5046m 31% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4927m 31% 14447Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 904m 5% 2801Mi 4% 23:59:38 DEBUG --- stderr --- 23:59:38 DEBUG 00:00:37 INFO 00:00:37 INFO [loop_until]: kubectl --namespace=xlou top pods 00:00:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:00:37 INFO [loop_until]: OK (rc = 0) 00:00:37 DEBUG --- stdout --- 00:00:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 115m 5774Mi am-55f77847b7-79tz5 115m 5792Mi am-55f77847b7-c4982 120m 5798Mi ds-cts-0 6m 384Mi ds-cts-1 7m 371Mi ds-cts-2 7m 375Mi ds-idrepo-0 9393m 13824Mi ds-idrepo-1 6281m 13779Mi ds-idrepo-2 6701m 13848Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4218m 5585Mi idm-65858d8c4c-wd2fd 4124m 5441Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1498m 1314Mi 00:00:37 DEBUG --- stderr --- 00:00:37 DEBUG 00:00:38 INFO 00:00:38 INFO [loop_until]: kubectl --namespace=xlou top node 00:00:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:00:38 INFO [loop_until]: OK (rc = 0) 00:00:38 DEBUG --- stdout --- 00:00:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 184m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 196m 1% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 179m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4670m 29% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1850m 11% 2892Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 4810m 30% 6689Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9563m 60% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6813m 42% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6595m 41% 14451Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1634m 10% 2800Mi 4% 00:00:38 DEBUG --- stderr --- 00:00:38 DEBUG 00:01:37 INFO 00:01:37 INFO [loop_until]: kubectl --namespace=xlou top pods 00:01:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:01:37 INFO [loop_until]: OK (rc = 0) 00:01:37 DEBUG --- stdout --- 00:01:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 129m 5775Mi am-55f77847b7-79tz5 124m 5792Mi am-55f77847b7-c4982 127m 5798Mi ds-cts-0 6m 385Mi ds-cts-1 8m 371Mi ds-cts-2 8m 376Mi ds-idrepo-0 9063m 13818Mi ds-idrepo-1 7576m 13732Mi ds-idrepo-2 5686m 13869Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4051m 5602Mi idm-65858d8c4c-wd2fd 4261m 5436Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 849m 1307Mi 00:01:37 DEBUG --- stderr --- 00:01:37 DEBUG 00:01:38 INFO 00:01:38 INFO [loop_until]: kubectl --namespace=xlou top node 00:01:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:01:38 INFO [loop_until]: OK (rc = 0) 00:01:38 DEBUG --- stdout --- 00:01:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1376Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 188m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 191m 1% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 174m 1% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4214m 26% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1602m 10% 2232Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4509m 28% 6691Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9147m 57% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6028m 37% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7713m 48% 14414Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 963m 6% 2799Mi 4% 00:01:38 DEBUG --- stderr --- 00:01:38 DEBUG 00:02:37 INFO 00:02:37 INFO [loop_until]: kubectl --namespace=xlou top pods 00:02:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:02:37 INFO [loop_until]: OK (rc = 0) 00:02:37 DEBUG --- stdout --- 00:02:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 122m 5775Mi am-55f77847b7-79tz5 118m 5792Mi am-55f77847b7-c4982 120m 5799Mi ds-cts-0 6m 385Mi ds-cts-1 7m 371Mi ds-cts-2 8m 376Mi ds-idrepo-0 10798m 13828Mi ds-idrepo-1 6123m 13876Mi ds-idrepo-2 5972m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4210m 5579Mi idm-65858d8c4c-wd2fd 4051m 5434Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 894m 1305Mi 00:02:37 DEBUG --- stderr --- 00:02:37 DEBUG 00:02:38 INFO 00:02:38 INFO [loop_until]: kubectl --namespace=xlou top node 00:02:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:02:38 INFO [loop_until]: OK (rc = 0) 00:02:38 DEBUG --- stdout --- 00:02:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 186m 1% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 177m 1% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4502m 28% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1646m 10% 2189Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4657m 29% 6693Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10066m 63% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5707m 35% 14428Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6202m 39% 14449Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 972m 6% 2800Mi 4% 00:02:38 DEBUG --- stderr --- 00:02:38 DEBUG 00:03:37 INFO 00:03:37 INFO [loop_until]: kubectl --namespace=xlou top pods 00:03:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:03:37 INFO [loop_until]: OK (rc = 0) 00:03:37 DEBUG --- stdout --- 00:03:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 125m 5775Mi am-55f77847b7-79tz5 127m 5792Mi am-55f77847b7-c4982 132m 5800Mi ds-cts-0 5m 384Mi ds-cts-1 7m 372Mi ds-cts-2 7m 375Mi ds-idrepo-0 9174m 13825Mi ds-idrepo-1 5350m 13824Mi ds-idrepo-2 4705m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4184m 5579Mi idm-65858d8c4c-wd2fd 4373m 5435Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 804m 1308Mi 00:03:37 DEBUG --- stderr --- 00:03:37 DEBUG 00:03:38 INFO 00:03:38 INFO [loop_until]: kubectl --namespace=xlou top node 00:03:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:03:38 
INFO [loop_until]: OK (rc = 0) 00:03:38 DEBUG --- stdout --- 00:03:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 189m 1% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 191m 1% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 179m 1% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4477m 28% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1592m 10% 2201Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4632m 29% 6694Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9512m 59% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5216m 32% 14418Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5662m 35% 14450Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 921m 5% 2799Mi 4% 00:03:38 DEBUG --- stderr --- 00:03:38 DEBUG 00:04:37 INFO 00:04:37 INFO [loop_until]: kubectl --namespace=xlou top pods 00:04:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:04:37 INFO [loop_until]: OK (rc = 0) 00:04:37 DEBUG --- stdout --- 00:04:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 133m 5775Mi am-55f77847b7-79tz5 126m 5793Mi am-55f77847b7-c4982 131m 5800Mi ds-cts-0 6m 386Mi ds-cts-1 8m 370Mi ds-cts-2 8m 375Mi ds-idrepo-0 9161m 13814Mi ds-idrepo-1 7356m 13762Mi ds-idrepo-2 7189m 13870Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4232m 5579Mi idm-65858d8c4c-wd2fd 4392m 5435Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 856m 1306Mi 00:04:37 DEBUG --- stderr --- 00:04:37 DEBUG 00:04:38 INFO 00:04:38 INFO [loop_until]: kubectl --namespace=xlou top node 00:04:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:04:38 INFO [loop_until]: OK (rc = 0) 00:04:38 DEBUG --- stdout --- 00:04:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 187m 1% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 186m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4331m 27% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1655m 10% 2187Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4562m 28% 6694Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9168m 57% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7431m 46% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7384m 46% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 935m 5% 2800Mi 4% 00:04:38 DEBUG --- stderr --- 00:04:38 DEBUG 00:05:38 INFO 00:05:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:05:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:05:38 INFO [loop_until]: OK (rc = 0) 00:05:38 DEBUG --- stdout --- 00:05:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 127m 5775Mi am-55f77847b7-79tz5 121m 5793Mi am-55f77847b7-c4982 119m 5801Mi ds-cts-0 6m 384Mi ds-cts-1 7m 369Mi ds-cts-2 8m 375Mi ds-idrepo-0 10293m 13804Mi ds-idrepo-1 5787m 13814Mi ds-idrepo-2 5416m 13807Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4299m 5580Mi idm-65858d8c4c-wd2fd 4379m 5435Mi lodemon-755c6d9977-9wwrg 1m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 909m 1309Mi 00:05:38 DEBUG --- stderr 
--- 00:05:38 DEBUG 00:05:38 INFO 00:05:38 INFO [loop_until]: kubectl --namespace=xlou top node 00:05:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:05:38 INFO [loop_until]: OK (rc = 0) 00:05:38 DEBUG --- stdout --- 00:05:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 185m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 182m 1% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 171m 1% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4512m 28% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1615m 10% 2224Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4679m 29% 6692Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10163m 63% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6299m 39% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6034m 37% 14427Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 983m 6% 2802Mi 4% 00:05:38 DEBUG --- stderr --- 00:05:38 DEBUG 00:06:38 INFO 00:06:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:06:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:06:38 INFO [loop_until]: OK (rc = 0) 00:06:38 DEBUG --- stdout --- 00:06:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 133m 5775Mi am-55f77847b7-79tz5 120m 5793Mi am-55f77847b7-c4982 114m 5800Mi ds-cts-0 6m 384Mi ds-cts-1 9m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 9653m 13662Mi ds-idrepo-1 4842m 13823Mi ds-idrepo-2 5200m 13774Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4324m 5581Mi idm-65858d8c4c-wd2fd 4477m 5436Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 888m 1307Mi 00:06:38 DEBUG --- stderr --- 00:06:38 DEBUG 00:06:38 INFO 00:06:38 INFO [loop_until]: kubectl --namespace=xlou top node 00:06:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:06:39 INFO [loop_until]: OK (rc = 0) 00:06:39 DEBUG --- stdout --- 00:06:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 196m 1% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 182m 1% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4536m 28% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1666m 10% 2191Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4549m 28% 6692Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9771m 61% 14317Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5573m 35% 14350Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5234m 32% 14449Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 969m 6% 2800Mi 4% 00:06:39 DEBUG --- stderr --- 00:06:39 DEBUG 00:07:38 INFO 00:07:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:07:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:07:38 INFO [loop_until]: OK (rc = 0) 00:07:38 DEBUG --- stdout --- 00:07:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 126m 5775Mi am-55f77847b7-79tz5 124m 5793Mi am-55f77847b7-c4982 125m 5801Mi ds-cts-0 5m 385Mi ds-cts-1 7m 369Mi ds-cts-2 8m 376Mi ds-idrepo-0 7998m 13823Mi ds-idrepo-1 5389m 13826Mi ds-idrepo-2 5446m 13805Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 
4113m 5580Mi idm-65858d8c4c-wd2fd 4094m 5436Mi lodemon-755c6d9977-9wwrg 5m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 869m 1307Mi 00:07:38 DEBUG --- stderr --- 00:07:38 DEBUG 00:07:39 INFO 00:07:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:07:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:07:39 INFO [loop_until]: OK (rc = 0) 00:07:39 DEBUG --- stdout --- 00:07:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 187m 1% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 181m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4381m 27% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1608m 10% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4570m 28% 6692Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8100m 50% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5527m 34% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5661m 35% 14453Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 947m 5% 2800Mi 4% 00:07:39 DEBUG --- stderr --- 00:07:39 DEBUG 00:08:38 INFO 00:08:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:08:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:08:38 INFO [loop_until]: OK (rc = 0) 00:08:38 DEBUG --- stdout --- 00:08:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 132m 5775Mi am-55f77847b7-79tz5 118m 5793Mi am-55f77847b7-c4982 122m 5801Mi ds-cts-0 5m 385Mi ds-cts-1 7m 369Mi ds-cts-2 7m 375Mi ds-idrepo-0 10182m 13825Mi ds-idrepo-1 7785m 13600Mi ds-idrepo-2 8003m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4110m 5581Mi idm-65858d8c4c-wd2fd 4254m 5437Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 880m 1309Mi 00:08:38 DEBUG --- stderr --- 00:08:38 DEBUG 00:08:39 INFO 00:08:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:08:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:08:39 INFO [loop_until]: OK (rc = 0) 00:08:39 DEBUG --- stdout --- 00:08:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 186m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 171m 1% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4275m 26% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1632m 10% 2242Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4376m 27% 6691Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9538m 60% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8042m 50% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7887m 49% 14260Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 944m 5% 2803Mi 4% 00:08:39 DEBUG --- stderr --- 00:08:39 DEBUG 00:09:38 INFO 00:09:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:09:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:09:38 INFO [loop_until]: OK (rc = 0) 00:09:38 DEBUG --- stdout --- 00:09:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 129m 5775Mi am-55f77847b7-79tz5 120m 5793Mi am-55f77847b7-c4982 119m 5801Mi ds-cts-0 6m 
386Mi ds-cts-1 7m 369Mi ds-cts-2 9m 376Mi ds-idrepo-0 8737m 13738Mi ds-idrepo-1 5116m 13822Mi ds-idrepo-2 5771m 13820Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4175m 5581Mi idm-65858d8c4c-wd2fd 4365m 5437Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 883m 1308Mi 00:09:38 DEBUG --- stderr --- 00:09:38 DEBUG 00:09:39 INFO 00:09:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:09:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:09:39 INFO [loop_until]: OK (rc = 0) 00:09:39 DEBUG --- stdout --- 00:09:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 192m 1% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 178m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4603m 28% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1660m 10% 2187Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4605m 28% 6692Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8685m 54% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5878m 36% 14496Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5862m 36% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 972m 6% 2802Mi 4% 00:09:39 DEBUG --- stderr --- 00:09:39 DEBUG 00:10:38 INFO 00:10:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:10:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:10:38 INFO [loop_until]: OK (rc = 0) 00:10:38 DEBUG --- stdout --- 00:10:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 128m 5776Mi am-55f77847b7-79tz5 124m 5793Mi am-55f77847b7-c4982 123m 5801Mi ds-cts-0 6m 384Mi ds-cts-1 7m 370Mi ds-cts-2 7m 375Mi ds-idrepo-0 8333m 13765Mi ds-idrepo-1 5166m 13807Mi ds-idrepo-2 5991m 13784Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4184m 5581Mi idm-65858d8c4c-wd2fd 4254m 5437Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 868m 1308Mi 00:10:38 DEBUG --- stderr --- 00:10:38 DEBUG 00:10:39 INFO 00:10:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:10:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:10:39 INFO [loop_until]: OK (rc = 0) 00:10:39 DEBUG --- stdout --- 00:10:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 191m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 181m 1% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4382m 27% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1631m 10% 2212Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4354m 27% 6692Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8675m 54% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5981m 37% 14383Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5194m 32% 14442Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 961m 6% 2801Mi 4% 00:10:39 DEBUG --- stderr --- 00:10:39 DEBUG 00:11:38 INFO 00:11:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:11:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:11:38 INFO [loop_until]: OK (rc = 0) 00:11:38 DEBUG --- stdout --- 00:11:38 DEBUG 
NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 133m 5776Mi am-55f77847b7-79tz5 119m 5793Mi am-55f77847b7-c4982 126m 5801Mi ds-cts-0 5m 384Mi ds-cts-1 7m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 8851m 13813Mi ds-idrepo-1 5554m 13809Mi ds-idrepo-2 5051m 13753Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4211m 5581Mi idm-65858d8c4c-wd2fd 4241m 5437Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 893m 1309Mi 00:11:38 DEBUG --- stderr --- 00:11:38 DEBUG 00:11:39 INFO 00:11:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:11:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:11:39 INFO [loop_until]: OK (rc = 0) 00:11:39 DEBUG --- stdout --- 00:11:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 182m 1% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 188m 1% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 186m 1% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4405m 27% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1637m 10% 2191Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4537m 28% 6692Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8540m 53% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5317m 33% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5672m 35% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 933m 5% 2798Mi 4% 00:11:39 DEBUG --- stderr --- 00:11:39 DEBUG 00:12:38 INFO 00:12:38 INFO [loop_until]: kubectl --namespace=xlou top pods 00:12:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:12:38 INFO [loop_until]: OK (rc = 0) 00:12:38 DEBUG --- stdout --- 00:12:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 128m 5776Mi am-55f77847b7-79tz5 117m 5796Mi am-55f77847b7-c4982 118m 5801Mi ds-cts-0 5m 385Mi ds-cts-1 7m 369Mi ds-cts-2 9m 375Mi ds-idrepo-0 10107m 13767Mi ds-idrepo-1 6635m 13827Mi ds-idrepo-2 7064m 13836Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4263m 5582Mi idm-65858d8c4c-wd2fd 4347m 5438Mi lodemon-755c6d9977-9wwrg 5m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 909m 1308Mi 00:12:38 DEBUG --- stderr --- 00:12:38 DEBUG 00:12:39 INFO 00:12:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:12:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:12:39 INFO [loop_until]: OK (rc = 0) 00:12:39 DEBUG --- stdout --- 00:12:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 187m 1% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 181m 1% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4552m 28% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1644m 10% 2185Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4617m 29% 6692Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 11024m 69% 14327Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7095m 44% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6668m 41% 14439Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 985m 6% 2798Mi 4% 00:12:39 DEBUG --- stderr --- 00:12:39 DEBUG 00:13:38 INFO 00:13:38 INFO [loop_until]: kubectl 
--namespace=xlou top pods 00:13:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:13:38 INFO [loop_until]: OK (rc = 0) 00:13:38 DEBUG --- stdout --- 00:13:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 128m 5776Mi am-55f77847b7-79tz5 123m 5796Mi am-55f77847b7-c4982 125m 5804Mi ds-cts-0 5m 386Mi ds-cts-1 7m 370Mi ds-cts-2 12m 376Mi ds-idrepo-0 9564m 13779Mi ds-idrepo-1 5801m 13880Mi ds-idrepo-2 6108m 13848Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4334m 5582Mi idm-65858d8c4c-wd2fd 4246m 5438Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 864m 1309Mi 00:13:38 DEBUG --- stderr --- 00:13:38 DEBUG 00:13:39 INFO 00:13:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:13:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:13:39 INFO [loop_until]: OK (rc = 0) 00:13:39 DEBUG --- stdout --- 00:13:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 186m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 188m 1% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 185m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4521m 28% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1653m 10% 2183Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4549m 28% 6694Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9789m 61% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6245m 39% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5886m 37% 14418Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 967m 6% 2800Mi 4% 00:13:39 DEBUG --- stderr --- 00:13:39 DEBUG 00:14:39 INFO 00:14:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:14:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:14:39 INFO [loop_until]: OK (rc = 0) 00:14:39 DEBUG --- stdout --- 00:14:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 131m 5776Mi am-55f77847b7-79tz5 123m 5796Mi am-55f77847b7-c4982 133m 5804Mi ds-cts-0 6m 385Mi ds-cts-1 7m 369Mi ds-cts-2 8m 376Mi ds-idrepo-0 9122m 13809Mi ds-idrepo-1 6555m 13801Mi ds-idrepo-2 7206m 13833Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4253m 5582Mi idm-65858d8c4c-wd2fd 4348m 5437Mi lodemon-755c6d9977-9wwrg 7m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 889m 1308Mi 00:14:39 DEBUG --- stderr --- 00:14:39 DEBUG 00:14:39 INFO 00:14:39 INFO [loop_until]: kubectl --namespace=xlou top node 00:14:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:14:39 INFO [loop_until]: OK (rc = 0) 00:14:39 DEBUG --- stdout --- 00:14:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 195m 1% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 202m 1% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 185m 1% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4419m 27% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1636m 10% 2186Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4544m 28% 6694Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9259m 58% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6580m 41% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 6672m 41% 14490Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 926m 5% 2802Mi 4% 00:14:39 DEBUG --- stderr --- 00:14:39 DEBUG 00:15:39 INFO 00:15:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:15:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:15:39 INFO [loop_until]: OK (rc = 0) 00:15:39 DEBUG --- stdout --- 00:15:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 117m 5776Mi am-55f77847b7-79tz5 124m 5796Mi am-55f77847b7-c4982 115m 5804Mi ds-cts-0 6m 385Mi ds-cts-1 7m 369Mi ds-cts-2 8m 376Mi ds-idrepo-0 10050m 13774Mi ds-idrepo-1 7361m 13702Mi ds-idrepo-2 6216m 13847Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4115m 5583Mi idm-65858d8c4c-wd2fd 4224m 5438Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 894m 1310Mi 00:15:39 DEBUG --- stderr --- 00:15:39 DEBUG 00:15:40 INFO 00:15:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:15:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:15:40 INFO [loop_until]: OK (rc = 0) 00:15:40 DEBUG --- stdout --- 00:15:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 182m 1% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 179m 1% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 181m 1% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4592m 28% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1670m 10% 2208Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4570m 28% 6698Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10346m 65% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6216m 39% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6954m 43% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 949m 5% 2801Mi 4% 00:15:40 DEBUG --- stderr --- 00:15:40 DEBUG 00:16:39 INFO 00:16:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:16:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:16:39 INFO [loop_until]: OK (rc = 0) 00:16:39 DEBUG --- stdout --- 00:16:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 130m 5776Mi am-55f77847b7-79tz5 123m 5796Mi am-55f77847b7-c4982 130m 5804Mi ds-cts-0 6m 384Mi ds-cts-1 7m 369Mi ds-cts-2 14m 376Mi ds-idrepo-0 9586m 13819Mi ds-idrepo-1 6571m 13769Mi ds-idrepo-2 5838m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4111m 5583Mi idm-65858d8c4c-wd2fd 4280m 5439Mi lodemon-755c6d9977-9wwrg 3m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 868m 1309Mi 00:16:39 DEBUG --- stderr --- 00:16:39 DEBUG 00:16:40 INFO 00:16:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:16:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:16:40 INFO [loop_until]: OK (rc = 0) 00:16:40 DEBUG --- stdout --- 00:16:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 194m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 196m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4537m 28% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1647m 10% 2191Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4616m 29% 6696Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1129Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9478m 59% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5720m 35% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6417m 40% 14444Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 968m 6% 2802Mi 4% 00:16:40 DEBUG --- stderr --- 00:16:40 DEBUG 00:17:39 INFO 00:17:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:17:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:17:39 INFO [loop_until]: OK (rc = 0) 00:17:39 DEBUG --- stdout --- 00:17:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 137m 5776Mi am-55f77847b7-79tz5 121m 5797Mi am-55f77847b7-c4982 128m 5804Mi ds-cts-0 5m 385Mi ds-cts-1 7m 369Mi ds-cts-2 9m 376Mi ds-idrepo-0 9269m 13797Mi ds-idrepo-1 5041m 13830Mi ds-idrepo-2 6019m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4209m 5583Mi idm-65858d8c4c-wd2fd 4239m 5438Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 907m 1309Mi 00:17:39 DEBUG --- stderr --- 00:17:39 DEBUG 00:17:40 INFO 00:17:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:17:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:17:40 INFO [loop_until]: OK (rc = 0) 00:17:40 DEBUG --- stdout --- 00:17:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 194m 1% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 193m 1% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 180m 1% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4555m 28% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1645m 10% 2190Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4389m 27% 6699Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9555m 60% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6351m 39% 14472Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5602m 35% 14451Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 956m 6% 2799Mi 4% 00:17:40 DEBUG --- stderr --- 00:17:40 DEBUG 00:18:39 INFO 00:18:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:18:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:18:39 INFO [loop_until]: OK (rc = 0) 00:18:39 DEBUG --- stdout --- 00:18:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 131m 5777Mi am-55f77847b7-79tz5 118m 5796Mi am-55f77847b7-c4982 122m 5805Mi ds-cts-0 6m 385Mi ds-cts-1 7m 369Mi ds-cts-2 7m 377Mi ds-idrepo-0 10275m 13713Mi ds-idrepo-1 8804m 13756Mi ds-idrepo-2 6420m 13852Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4312m 5583Mi idm-65858d8c4c-wd2fd 4386m 5439Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 909m 1309Mi 00:18:39 DEBUG --- stderr --- 00:18:39 DEBUG 00:18:40 INFO 00:18:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:18:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:18:40 INFO [loop_until]: OK (rc = 0) 00:18:40 DEBUG --- stdout --- 00:18:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 187m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 186m 1% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 186m 1% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4551m 28% 6898Mi 
11% gke-xlou-cdm-default-pool-f05840a3-h81k 1612m 10% 2185Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4739m 29% 6696Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9510m 59% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6756m 42% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8632m 54% 14340Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 982m 6% 2798Mi 4% 00:18:40 DEBUG --- stderr --- 00:18:40 DEBUG 00:19:39 INFO 00:19:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:19:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:19:39 INFO [loop_until]: OK (rc = 0) 00:19:39 DEBUG --- stdout --- 00:19:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 132m 5776Mi am-55f77847b7-79tz5 123m 5797Mi am-55f77847b7-c4982 121m 5805Mi ds-cts-0 5m 385Mi ds-cts-1 7m 369Mi ds-cts-2 8m 376Mi ds-idrepo-0 7713m 13824Mi ds-idrepo-1 6127m 13784Mi ds-idrepo-2 7498m 13785Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4214m 5583Mi idm-65858d8c4c-wd2fd 4351m 5439Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 855m 1310Mi 00:19:39 DEBUG --- stderr --- 00:19:39 DEBUG 00:19:40 INFO 00:19:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:19:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:19:40 INFO [loop_until]: OK (rc = 0) 00:19:40 DEBUG --- stdout --- 00:19:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 187m 1% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 191m 1% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 187m 1% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4418m 27% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1648m 10% 2191Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4583m 28% 6695Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9066m 57% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7732m 48% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6075m 38% 14437Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 960m 6% 2800Mi 4% 00:19:40 DEBUG --- stderr --- 00:19:40 DEBUG 00:20:39 INFO 00:20:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:20:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:20:39 INFO [loop_until]: OK (rc = 0) 00:20:39 DEBUG --- stdout --- 00:20:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 131m 5777Mi am-55f77847b7-79tz5 122m 5797Mi am-55f77847b7-c4982 123m 5805Mi ds-cts-0 5m 385Mi ds-cts-1 7m 369Mi ds-cts-2 7m 376Mi ds-idrepo-0 9059m 13693Mi ds-idrepo-1 6458m 13809Mi ds-idrepo-2 5364m 13822Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4149m 5583Mi idm-65858d8c4c-wd2fd 4374m 5439Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 888m 1311Mi 00:20:39 DEBUG --- stderr --- 00:20:39 DEBUG 00:20:40 INFO 00:20:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:20:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:20:40 INFO [loop_until]: OK (rc = 0) 00:20:40 DEBUG --- stdout --- 00:20:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 6818Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 188m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 181m 1% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4281m 26% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1595m 10% 2186Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4628m 29% 6695Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9016m 56% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5382m 33% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6573m 41% 14416Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 965m 6% 2800Mi 4% 00:20:40 DEBUG --- stderr --- 00:20:40 DEBUG 00:21:39 INFO 00:21:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:21:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:21:39 INFO [loop_until]: OK (rc = 0) 00:21:39 DEBUG --- stdout --- 00:21:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 121m 5777Mi am-55f77847b7-79tz5 129m 5797Mi am-55f77847b7-c4982 124m 5805Mi ds-cts-0 5m 385Mi ds-cts-1 7m 369Mi ds-cts-2 8m 376Mi ds-idrepo-0 10282m 13823Mi ds-idrepo-1 5771m 13829Mi ds-idrepo-2 5561m 13794Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4174m 5583Mi idm-65858d8c4c-wd2fd 4302m 5521Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 904m 1312Mi 00:21:39 DEBUG --- stderr --- 00:21:39 DEBUG 00:21:40 INFO 00:21:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:21:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:21:40 INFO [loop_until]: OK (rc = 0) 00:21:40 DEBUG --- stdout --- 00:21:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 186m 1% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 190m 1% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 191m 1% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4475m 28% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1649m 10% 2187Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4707m 29% 6779Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10366m 65% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5742m 36% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6301m 39% 14459Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 981m 6% 2800Mi 4% 00:21:40 DEBUG --- stderr --- 00:21:40 DEBUG 00:22:39 INFO 00:22:39 INFO [loop_until]: kubectl --namespace=xlou top pods 00:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:22:39 INFO [loop_until]: OK (rc = 0) 00:22:39 DEBUG --- stdout --- 00:22:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 133m 5777Mi am-55f77847b7-79tz5 124m 5797Mi am-55f77847b7-c4982 127m 5805Mi ds-cts-0 6m 385Mi ds-cts-1 7m 371Mi ds-cts-2 7m 376Mi ds-idrepo-0 9042m 13780Mi ds-idrepo-1 7257m 13906Mi ds-idrepo-2 8076m 13826Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4257m 5588Mi idm-65858d8c4c-wd2fd 4402m 5440Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 915m 1311Mi 00:22:39 DEBUG --- stderr --- 00:22:39 DEBUG 00:22:40 INFO 00:22:40 INFO [loop_until]: kubectl --namespace=xlou top node 00:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:22:40 INFO [loop_until]: OK (rc = 0) 00:22:40 DEBUG --- stdout --- 
00:22:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 192m 1% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 189m 1% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 175m 1% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4518m 28% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1605m 10% 2204Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4663m 29% 6699Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9193m 57% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8315m 52% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7303m 45% 14463Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 984m 6% 2798Mi 4% 00:22:40 DEBUG --- stderr --- 00:22:40 DEBUG 00:23:40 INFO 00:23:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:23:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:23:40 INFO [loop_until]: OK (rc = 0) 00:23:40 DEBUG --- stdout --- 00:23:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 130m 5777Mi am-55f77847b7-79tz5 125m 5797Mi am-55f77847b7-c4982 119m 5805Mi ds-cts-0 5m 385Mi ds-cts-1 7m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 9194m 13813Mi ds-idrepo-1 7286m 13650Mi ds-idrepo-2 6862m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4143m 5589Mi idm-65858d8c4c-wd2fd 4441m 5440Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 913m 1312Mi 00:23:40 DEBUG --- stderr --- 00:23:40 DEBUG 00:23:41 INFO 00:23:41 INFO [loop_until]: kubectl --namespace=xlou top node 00:23:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:23:41 INFO [loop_until]: OK (rc = 0) 00:23:41 DEBUG --- stdout --- 00:23:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 189m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 180m 1% 6955Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 179m 1% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4298m 27% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1638m 10% 2188Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4643m 29% 6698Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9038m 56% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7918m 49% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7167m 45% 14313Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 970m 6% 2797Mi 4% 00:23:41 DEBUG --- stderr --- 00:23:41 DEBUG 00:24:40 INFO 00:24:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:24:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:24:40 INFO [loop_until]: OK (rc = 0) 00:24:40 DEBUG --- stdout --- 00:24:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 127m 5777Mi am-55f77847b7-79tz5 122m 5797Mi am-55f77847b7-c4982 134m 5805Mi ds-cts-0 12m 390Mi ds-cts-1 8m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 9191m 13822Mi ds-idrepo-1 5819m 13782Mi ds-idrepo-2 5283m 13776Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4327m 5588Mi idm-65858d8c4c-wd2fd 4150m 5439Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 894m 1312Mi 00:24:40 DEBUG --- stderr --- 00:24:40 DEBUG 00:24:41 INFO 00:24:41 INFO [loop_until]: 
kubectl --namespace=xlou top node 00:24:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:24:41 INFO [loop_until]: OK (rc = 0) 00:24:41 DEBUG --- stdout --- 00:24:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 191m 1% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 185m 1% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 182m 1% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4512m 28% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1637m 10% 2212Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4356m 27% 6694Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8723m 54% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6012m 37% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5823m 36% 14402Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 961m 6% 2800Mi 4% 00:24:41 DEBUG --- stderr --- 00:24:41 DEBUG 00:25:40 INFO 00:25:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:25:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:25:40 INFO [loop_until]: OK (rc = 0) 00:25:40 DEBUG --- stdout --- 00:25:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 127m 5778Mi am-55f77847b7-79tz5 118m 5798Mi am-55f77847b7-c4982 122m 5806Mi ds-cts-0 6m 384Mi ds-cts-1 7m 370Mi ds-cts-2 7m 377Mi ds-idrepo-0 10530m 13683Mi ds-idrepo-1 6594m 13883Mi ds-idrepo-2 5686m 13825Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 4253m 5588Mi idm-65858d8c4c-wd2fd 4356m 5440Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 879m 1314Mi 00:25:40 DEBUG --- stderr --- 00:25:40 DEBUG 00:25:41 INFO 00:25:41 INFO [loop_until]: kubectl --namespace=xlou top node 00:25:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:25:41 INFO [loop_until]: OK (rc = 0) 00:25:41 DEBUG --- stdout --- 00:25:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 182m 1% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 185m 1% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4564m 28% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1678m 10% 2228Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4668m 29% 6695Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10994m 69% 14305Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5758m 36% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6736m 42% 14448Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 971m 6% 2801Mi 4% 00:25:41 DEBUG --- stderr --- 00:25:41 DEBUG 00:26:40 INFO 00:26:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:26:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:26:40 INFO [loop_until]: OK (rc = 0) 00:26:40 DEBUG --- stdout --- 00:26:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 94m 5777Mi am-55f77847b7-79tz5 75m 5798Mi am-55f77847b7-c4982 94m 5807Mi ds-cts-0 5m 384Mi ds-cts-1 6m 370Mi ds-cts-2 7m 377Mi ds-idrepo-0 4406m 13790Mi ds-idrepo-1 5452m 13764Mi ds-idrepo-2 4176m 13805Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2565m 5588Mi idm-65858d8c4c-wd2fd 2790m 5440Mi 
lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 452m 1311Mi 00:26:40 DEBUG --- stderr --- 00:26:40 DEBUG 00:26:41 INFO 00:26:41 INFO [loop_until]: kubectl --namespace=xlou top node 00:26:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:26:41 INFO [loop_until]: OK (rc = 0) 00:26:41 DEBUG --- stdout --- 00:26:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 127m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 125m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2745m 17% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 854m 5% 2187Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2373m 14% 6695Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4305m 27% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2919m 18% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4042m 25% 14461Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 599m 3% 2803Mi 4% 00:26:41 DEBUG --- stderr --- 00:26:41 DEBUG 00:27:40 INFO 00:27:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:27:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:27:40 INFO [loop_until]: OK (rc = 0) 00:27:40 DEBUG --- stdout --- 00:27:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 9m 5777Mi am-55f77847b7-79tz5 6m 5797Mi am-55f77847b7-c4982 8m 5813Mi ds-cts-0 5m 384Mi ds-cts-1 5m 370Mi ds-cts-2 7m 377Mi ds-idrepo-0 190m 13610Mi ds-idrepo-1 9m 13718Mi ds-idrepo-2 10m 13732Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7m 5587Mi idm-65858d8c4c-wd2fd 9m 5439Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 58m 250Mi 00:27:40 DEBUG --- stderr --- 00:27:40 DEBUG 00:27:41 INFO 00:27:41 INFO [loop_until]: kubectl --namespace=xlou top node 00:27:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:27:41 INFO [loop_until]: OK (rc = 0) 00:27:41 DEBUG --- stdout --- 00:27:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2186Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6698Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 245m 1% 14227Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14327Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1748Mi 2% 00:27:41 DEBUG --- stderr --- 00:27:41 DEBUG 127.0.0.1 - - [13/Aug/2023 00:28:33] "GET /monitoring/average?start_time=23-08-12_22:57:47&stop_time=23-08-12_23:26:32 HTTP/1.1" 200 - 00:28:40 INFO 00:28:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:28:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:28:40 INFO [loop_until]: OK (rc = 0) 00:28:40 DEBUG --- stdout --- 00:28:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 5777Mi am-55f77847b7-79tz5 6m 5798Mi 
am-55f77847b7-c4982 7m 5813Mi ds-cts-0 7m 385Mi ds-cts-1 5m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 11m 13610Mi ds-idrepo-1 9m 13719Mi ds-idrepo-2 10m 13731Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 5m 5587Mi idm-65858d8c4c-wd2fd 7m 5439Mi lodemon-755c6d9977-9wwrg 1m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 250Mi 00:28:40 DEBUG --- stderr --- 00:28:40 DEBUG 00:28:41 INFO 00:28:41 INFO [loop_until]: kubectl --namespace=xlou top node 00:28:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:28:41 INFO [loop_until]: OK (rc = 0) 00:28:41 DEBUG --- stdout --- 00:28:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 6698Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14225Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14327Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14338Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 83m 0% 1759Mi 3% 00:28:41 DEBUG --- stderr --- 00:28:41 DEBUG 00:29:40 INFO 00:29:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:29:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:29:40 INFO [loop_until]: OK (rc = 0) 00:29:40 DEBUG --- stdout --- 00:29:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 182m 5792Mi am-55f77847b7-79tz5 243m 5801Mi am-55f77847b7-c4982 252m 5837Mi ds-cts-0 7m 385Mi ds-cts-1 7m 371Mi ds-cts-2 10m 376Mi ds-idrepo-0 2804m 13725Mi ds-idrepo-1 1428m 13799Mi ds-idrepo-2 2572m 13845Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 583m 5591Mi idm-65858d8c4c-wd2fd 716m 5447Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1060m 842Mi 00:29:40 DEBUG --- stderr --- 00:29:40 DEBUG 00:29:41 INFO 00:29:41 INFO [loop_until]: kubectl --namespace=xlou top node 00:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:29:41 INFO [loop_until]: OK (rc = 0) 00:29:41 DEBUG --- stdout --- 00:29:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 246m 1% 6836Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 365m 2% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 307m 1% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1130m 7% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 792m 4% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1486m 9% 6701Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3849m 24% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2082m 13% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2142m 13% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1150m 7% 2337Mi 3% 00:29:41 DEBUG --- stderr --- 00:29:41 DEBUG 00:30:40 INFO 00:30:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:30:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:30:40 INFO [loop_until]: OK (rc = 0) 00:30:40 DEBUG --- stdout --- 00:30:40 
DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 247m 5785Mi am-55f77847b7-79tz5 190m 5801Mi am-55f77847b7-c4982 277m 5831Mi ds-cts-0 5m 385Mi ds-cts-1 7m 370Mi ds-cts-2 7m 377Mi ds-idrepo-0 9654m 13820Mi ds-idrepo-1 4565m 13823Mi ds-idrepo-2 4875m 13818Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2834m 5594Mi idm-65858d8c4c-wd2fd 2951m 5457Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 911m 885Mi 00:30:40 DEBUG --- stderr --- 00:30:40 DEBUG 00:30:41 INFO 00:30:41 INFO [loop_until]: kubectl --namespace=xlou top node 00:30:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:30:41 INFO [loop_until]: OK (rc = 0) 00:30:41 DEBUG --- stdout --- 00:30:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 311m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 254m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 256m 1% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3055m 19% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1643m 10% 2213Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3127m 19% 6716Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9543m 60% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5318m 33% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4678m 29% 14483Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 966m 6% 2375Mi 4% 00:30:41 DEBUG --- stderr --- 00:30:41 DEBUG 00:31:40 INFO 00:31:40 INFO [loop_until]: kubectl --namespace=xlou top pods 00:31:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:31:40 INFO [loop_until]: OK (rc = 0) 00:31:40 DEBUG --- stdout --- 00:31:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 40m 5785Mi am-55f77847b7-79tz5 39m 5801Mi am-55f77847b7-c4982 39m 5831Mi ds-cts-0 9m 384Mi ds-cts-1 10m 370Mi ds-cts-2 7m 377Mi ds-idrepo-0 9064m 13823Mi ds-idrepo-1 4678m 13825Mi ds-idrepo-2 5006m 13850Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2729m 5597Mi idm-65858d8c4c-wd2fd 2778m 5459Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1053m 950Mi 00:31:40 DEBUG --- stderr --- 00:31:40 DEBUG 00:31:42 INFO 00:31:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:31:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:31:42 INFO [loop_until]: OK (rc = 0) 00:31:42 DEBUG --- stdout --- 00:31:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3032m 19% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1664m 10% 2304Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2946m 18% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9053m 56% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5047m 31% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5021m 31% 14465Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1104m 6% 2430Mi 4% 00:31:42 DEBUG --- stderr --- 00:31:42 DEBUG 00:32:40 INFO 00:32:40 INFO [loop_until]: kubectl 
--namespace=xlou top pods 00:32:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:32:41 INFO [loop_until]: OK (rc = 0) 00:32:41 DEBUG --- stdout --- 00:32:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 38m 5787Mi am-55f77847b7-79tz5 47m 5798Mi am-55f77847b7-c4982 38m 5838Mi ds-cts-0 7m 384Mi ds-cts-1 8m 371Mi ds-cts-2 7m 376Mi ds-idrepo-0 8974m 13841Mi ds-idrepo-1 5162m 13817Mi ds-idrepo-2 5964m 13820Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 1681m 5602Mi idm-65858d8c4c-wd2fd 1804m 5461Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1159m 1039Mi 00:32:41 DEBUG --- stderr --- 00:32:41 DEBUG 00:32:42 INFO 00:32:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:32:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:32:42 INFO [loop_until]: OK (rc = 0) 00:32:42 DEBUG --- stdout --- 00:32:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 122m 0% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 159m 1% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2316m 14% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1715m 10% 2194Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1481m 9% 6716Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8169m 51% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6705m 42% 14302Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5168m 32% 14436Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1268m 7% 2530Mi 4% 00:32:42 DEBUG --- stderr --- 00:32:42 DEBUG 00:33:41 INFO 00:33:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:33:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:33:41 INFO [loop_until]: OK (rc = 0) 00:33:41 DEBUG --- stdout --- 00:33:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 134m 5787Mi am-55f77847b7-79tz5 135m 5799Mi am-55f77847b7-c4982 253m 5841Mi ds-cts-0 6m 384Mi ds-cts-1 7m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 9631m 13801Mi ds-idrepo-1 5315m 13722Mi ds-idrepo-2 5750m 13851Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2942m 5598Mi idm-65858d8c4c-wd2fd 2932m 5457Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 876m 1056Mi 00:33:41 DEBUG --- stderr --- 00:33:41 DEBUG 00:33:42 INFO 00:33:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:33:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:33:42 INFO [loop_until]: OK (rc = 0) 00:33:42 DEBUG --- stdout --- 00:33:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 213m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 335m 2% 6983Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 165m 1% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3188m 20% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1616m 10% 2188Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3170m 19% 6717Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9577m 60% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5731m 36% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 
5024m 31% 14457Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 904m 5% 2544Mi 4% 00:33:42 DEBUG --- stderr --- 00:33:42 DEBUG 00:34:41 INFO 00:34:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:34:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:34:41 INFO [loop_until]: OK (rc = 0) 00:34:41 DEBUG --- stdout --- 00:34:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 204m 5791Mi am-55f77847b7-79tz5 161m 5802Mi am-55f77847b7-c4982 314m 5845Mi ds-cts-0 5m 384Mi ds-cts-1 7m 370Mi ds-cts-2 9m 376Mi ds-idrepo-0 9892m 13848Mi ds-idrepo-1 5156m 13823Mi ds-idrepo-2 5425m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2805m 5598Mi idm-65858d8c4c-wd2fd 2962m 5457Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 826m 1057Mi 00:34:41 DEBUG --- stderr --- 00:34:41 DEBUG 00:34:42 INFO 00:34:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:34:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:34:42 INFO [loop_until]: OK (rc = 0) 00:34:42 DEBUG --- stdout --- 00:34:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 334m 2% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 359m 2% 6984Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 322m 2% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3148m 19% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1660m 10% 2194Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3236m 20% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10010m 62% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5228m 32% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4842m 30% 14452Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 912m 5% 2543Mi 4% 00:34:42 DEBUG --- stderr --- 00:34:42 DEBUG 00:35:41 INFO 00:35:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:35:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:35:41 INFO [loop_until]: OK (rc = 0) 00:35:41 DEBUG --- stdout --- 00:35:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 204m 5792Mi am-55f77847b7-79tz5 247m 5803Mi am-55f77847b7-c4982 248m 5845Mi ds-cts-0 5m 384Mi ds-cts-1 7m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 10664m 13784Mi ds-idrepo-1 5629m 13860Mi ds-idrepo-2 6096m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2807m 5598Mi idm-65858d8c4c-wd2fd 2987m 5458Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 814m 1057Mi 00:35:41 DEBUG --- stderr --- 00:35:41 DEBUG 00:35:42 INFO 00:35:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:35:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:35:42 INFO [loop_until]: OK (rc = 0) 00:35:42 DEBUG --- stdout --- 00:35:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 265m 1% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 315m 1% 6987Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 313m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3081m 19% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1590m 10% 2195Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3239m 20% 6715Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1108Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 10638m 66% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5418m 34% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5518m 34% 14409Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 896m 5% 2544Mi 4% 00:35:42 DEBUG --- stderr --- 00:35:42 DEBUG 00:36:41 INFO 00:36:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:36:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:36:41 INFO [loop_until]: OK (rc = 0) 00:36:41 DEBUG --- stdout --- 00:36:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 40m 5792Mi am-55f77847b7-79tz5 37m 5803Mi am-55f77847b7-c4982 35m 5848Mi ds-cts-0 5m 386Mi ds-cts-1 7m 370Mi ds-cts-2 7m 378Mi ds-idrepo-0 8783m 13636Mi ds-idrepo-1 4402m 13814Mi ds-idrepo-2 5156m 13806Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2639m 5599Mi idm-65858d8c4c-wd2fd 3038m 5458Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 824m 1065Mi 00:36:41 DEBUG --- stderr --- 00:36:41 DEBUG 00:36:42 INFO 00:36:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:36:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:36:42 INFO [loop_until]: OK (rc = 0) 00:36:42 DEBUG --- stdout --- 00:36:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6991Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2897m 18% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1683m 10% 2276Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3052m 19% 6718Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8644m 54% 14260Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5442m 34% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4676m 29% 14504Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 905m 5% 2544Mi 4% 00:36:42 DEBUG --- stderr --- 00:36:42 DEBUG 00:37:41 INFO 00:37:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:37:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:37:41 INFO [loop_until]: OK (rc = 0) 00:37:41 DEBUG --- stdout --- 00:37:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 41m 5792Mi am-55f77847b7-79tz5 42m 5802Mi am-55f77847b7-c4982 39m 5848Mi ds-cts-0 5m 384Mi ds-cts-1 7m 370Mi ds-cts-2 12m 382Mi ds-idrepo-0 9532m 13788Mi ds-idrepo-1 5695m 13832Mi ds-idrepo-2 5665m 13822Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2870m 5600Mi idm-65858d8c4c-wd2fd 2925m 5459Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 850m 1062Mi 00:37:41 DEBUG --- stderr --- 00:37:41 DEBUG 00:37:42 INFO 00:37:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:37:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:37:42 INFO [loop_until]: OK (rc = 0) 00:37:42 DEBUG --- stdout --- 00:37:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3205m 20% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1656m 10% 
2280Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3389m 21% 6716Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9821m 61% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5919m 37% 14311Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5834m 36% 14479Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 970m 6% 2543Mi 4% 00:37:42 DEBUG --- stderr --- 00:37:42 DEBUG 00:38:41 INFO 00:38:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:38:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:38:41 INFO [loop_until]: OK (rc = 0) 00:38:41 DEBUG --- stdout --- 00:38:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 93m 5792Mi am-55f77847b7-79tz5 78m 5806Mi am-55f77847b7-c4982 37m 5848Mi ds-cts-0 5m 384Mi ds-cts-1 7m 370Mi ds-cts-2 7m 383Mi ds-idrepo-0 9468m 13826Mi ds-idrepo-1 5044m 13805Mi ds-idrepo-2 5260m 13785Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2823m 5598Mi idm-65858d8c4c-wd2fd 2989m 5458Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 829m 1061Mi 00:38:41 DEBUG --- stderr --- 00:38:41 DEBUG 00:38:42 INFO 00:38:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:38:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:38:42 INFO [loop_until]: OK (rc = 0) 00:38:42 DEBUG --- stdout --- 00:38:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6993Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3179m 20% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1675m 10% 2248Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3241m 20% 6718Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9571m 60% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5347m 33% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5184m 32% 14464Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 926m 5% 2545Mi 4% 00:38:42 DEBUG --- stderr --- 00:38:42 DEBUG 00:39:41 INFO 00:39:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:39:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:39:41 INFO [loop_until]: OK (rc = 0) 00:39:41 DEBUG --- stdout --- 00:39:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 75m 5792Mi am-55f77847b7-79tz5 124m 5805Mi am-55f77847b7-c4982 90m 5848Mi ds-cts-0 5m 385Mi ds-cts-1 7m 370Mi ds-cts-2 7m 382Mi ds-idrepo-0 9700m 13810Mi ds-idrepo-1 5491m 13807Mi ds-idrepo-2 5891m 13824Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2861m 5598Mi idm-65858d8c4c-wd2fd 2877m 5458Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 847m 1060Mi 00:39:41 DEBUG --- stderr --- 00:39:41 DEBUG 00:39:42 INFO 00:39:42 INFO [loop_until]: kubectl --namespace=xlou top node 00:39:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:39:43 INFO [loop_until]: OK (rc = 0) 00:39:43 DEBUG --- stdout --- 00:39:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 182m 1% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 185m 1% 6986Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 197m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3177m 19% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1597m 10% 2191Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3257m 20% 6718Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9956m 62% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6226m 39% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5521m 34% 14406Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 902m 5% 2547Mi 4% 00:39:43 DEBUG --- stderr --- 00:39:43 DEBUG 00:40:41 INFO 00:40:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:40:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:40:41 INFO [loop_until]: OK (rc = 0) 00:40:41 DEBUG --- stdout --- 00:40:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 37m 5795Mi am-55f77847b7-79tz5 34m 5805Mi am-55f77847b7-c4982 39m 5848Mi ds-cts-0 5m 385Mi ds-cts-1 7m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 8048m 13736Mi ds-idrepo-1 5703m 13649Mi ds-idrepo-2 5807m 13854Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2611m 5600Mi idm-65858d8c4c-wd2fd 2680m 5459Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 859m 1067Mi 00:40:41 DEBUG --- stderr --- 00:40:41 DEBUG 00:40:43 INFO 00:40:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:40:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:40:43 INFO [loop_until]: OK (rc = 0) 00:40:43 DEBUG --- stdout --- 00:40:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 6834Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6983Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2892m 18% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1693m 10% 2293Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3017m 18% 6719Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 50m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8634m 54% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5256m 33% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5898m 37% 14322Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 945m 5% 2546Mi 4% 00:40:43 DEBUG --- stderr --- 00:40:43 DEBUG 00:41:41 INFO 00:41:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:41:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:41:41 INFO [loop_until]: OK (rc = 0) 00:41:41 DEBUG --- stdout --- 00:41:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 192m 5795Mi am-55f77847b7-79tz5 121m 5805Mi am-55f77847b7-c4982 139m 5848Mi ds-cts-0 5m 385Mi ds-cts-1 7m 370Mi ds-cts-2 6m 376Mi ds-idrepo-0 8864m 13823Mi ds-idrepo-1 5948m 13857Mi ds-idrepo-2 6373m 13814Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2934m 5600Mi idm-65858d8c4c-wd2fd 3073m 5459Mi lodemon-755c6d9977-9wwrg 1m 66Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 852m 1060Mi 00:41:41 DEBUG --- stderr --- 00:41:41 DEBUG 00:41:43 INFO 00:41:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:41:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:41:43 INFO [loop_until]: OK (rc = 0) 00:41:43 DEBUG --- stdout --- 00:41:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 245m 1% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 202m 1% 6989Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3274m 20% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1661m 10% 2192Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3226m 20% 6716Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9153m 57% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6225m 39% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5859m 36% 14445Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 915m 5% 2546Mi 4% 00:41:43 DEBUG --- stderr --- 00:41:43 DEBUG 00:42:41 INFO 00:42:41 INFO [loop_until]: kubectl --namespace=xlou top pods 00:42:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:42:42 INFO [loop_until]: OK (rc = 0) 00:42:42 DEBUG --- stdout --- 00:42:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 25m 5794Mi am-55f77847b7-79tz5 35m 5805Mi am-55f77847b7-c4982 23m 5848Mi ds-cts-0 8m 386Mi ds-cts-1 7m 371Mi ds-cts-2 7m 376Mi ds-idrepo-0 9307m 13640Mi ds-idrepo-1 5219m 13865Mi ds-idrepo-2 5791m 13796Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2576m 5600Mi idm-65858d8c4c-wd2fd 2952m 5459Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 862m 1072Mi 00:42:42 DEBUG --- stderr --- 00:42:42 DEBUG 00:42:43 INFO 00:42:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:42:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:42:43 INFO [loop_until]: OK (rc = 0) 00:42:43 DEBUG --- stdout --- 00:42:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 83m 0% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6986Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2861m 18% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1695m 10% 2347Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2928m 18% 6717Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9488m 59% 14287Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5992m 37% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5098m 32% 14455Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 918m 5% 2544Mi 4% 00:42:43 DEBUG --- stderr --- 00:42:43 DEBUG 00:43:42 INFO 00:43:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:43:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:43:42 INFO [loop_until]: OK (rc = 0) 00:43:42 DEBUG --- stdout --- 00:43:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 49m 5794Mi am-55f77847b7-79tz5 54m 5805Mi am-55f77847b7-c4982 34m 5848Mi ds-cts-0 5m 386Mi ds-cts-1 7m 370Mi ds-cts-2 7m 376Mi ds-idrepo-0 5889m 13698Mi ds-idrepo-1 6222m 13755Mi ds-idrepo-2 5735m 13817Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2610m 5602Mi idm-65858d8c4c-wd2fd 1136m 5460Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 934m 1062Mi 00:43:42 DEBUG --- stderr --- 00:43:42 DEBUG 00:43:43 INFO 00:43:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:43:43 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 00:43:43 INFO [loop_until]: OK (rc = 0) 00:43:43 DEBUG --- stdout --- 00:43:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6989Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 111m 0% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2075m 13% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1734m 10% 2189Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1280m 8% 6716Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5025m 31% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5930m 37% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6622m 41% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 994m 6% 2545Mi 4% 00:43:43 DEBUG --- stderr --- 00:43:43 DEBUG 00:44:42 INFO 00:44:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:44:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:44:42 INFO [loop_until]: OK (rc = 0) 00:44:42 DEBUG --- stdout --- 00:44:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 217m 5795Mi am-55f77847b7-79tz5 243m 5807Mi am-55f77847b7-c4982 122m 5850Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 7m 376Mi ds-idrepo-0 8646m 13822Mi ds-idrepo-1 5343m 13810Mi ds-idrepo-2 5260m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2856m 5599Mi idm-65858d8c4c-wd2fd 3014m 5458Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 823m 1062Mi 00:44:42 DEBUG --- stderr --- 00:44:42 DEBUG 00:44:43 INFO 00:44:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:44:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:44:43 INFO [loop_until]: OK (rc = 0) 00:44:43 DEBUG --- stdout --- 00:44:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 324m 2% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 236m 1% 6987Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 348m 2% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3186m 20% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1672m 10% 2190Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3330m 20% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9320m 58% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4629m 29% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5178m 32% 14492Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 918m 5% 2546Mi 4% 00:44:43 DEBUG --- stderr --- 00:44:43 DEBUG 00:45:42 INFO 00:45:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:45:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:45:42 INFO [loop_until]: OK (rc = 0) 00:45:42 DEBUG --- stdout --- 00:45:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 142m 5795Mi am-55f77847b7-79tz5 106m 5807Mi am-55f77847b7-c4982 133m 5850Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 8m 376Mi ds-idrepo-0 8960m 13677Mi ds-idrepo-1 5765m 13780Mi ds-idrepo-2 5277m 13803Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2818m 5600Mi idm-65858d8c4c-wd2fd 3163m 5459Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi 
overseer-0-c77c496cb-dtn6s 811m 1063Mi 00:45:42 DEBUG --- stderr --- 00:45:42 DEBUG 00:45:43 INFO 00:45:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:45:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:45:43 INFO [loop_until]: OK (rc = 0) 00:45:43 DEBUG --- stdout --- 00:45:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 201m 1% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 210m 1% 6991Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 171m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3239m 20% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1682m 10% 2181Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3362m 21% 6713Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 50m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8555m 53% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5075m 31% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6082m 38% 14462Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 908m 5% 2548Mi 4% 00:45:43 DEBUG --- stderr --- 00:45:43 DEBUG 00:46:42 INFO 00:46:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:46:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:46:42 INFO [loop_until]: OK (rc = 0) 00:46:42 DEBUG --- stdout --- 00:46:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 215m 5797Mi am-55f77847b7-79tz5 157m 5807Mi am-55f77847b7-c4982 318m 5850Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 7m 376Mi ds-idrepo-0 9022m 13802Mi ds-idrepo-1 5729m 13759Mi ds-idrepo-2 5574m 13715Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2828m 5599Mi idm-65858d8c4c-wd2fd 2892m 5458Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 826m 1066Mi 00:46:42 DEBUG --- stderr --- 00:46:42 DEBUG 00:46:43 INFO 00:46:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:46:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:46:43 INFO [loop_until]: OK (rc = 0) 00:46:43 DEBUG --- stdout --- 00:46:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 261m 1% 6836Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 282m 1% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 173m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3078m 19% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1651m 10% 2222Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3108m 19% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9176m 57% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5728m 36% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5956m 37% 14469Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 879m 5% 2544Mi 4% 00:46:43 DEBUG --- stderr --- 00:46:43 DEBUG 00:47:42 INFO 00:47:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:47:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:47:42 INFO [loop_until]: OK (rc = 0) 00:47:42 DEBUG --- stdout --- 00:47:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 40m 5797Mi am-55f77847b7-79tz5 37m 5807Mi am-55f77847b7-c4982 42m 5850Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 11m 384Mi ds-idrepo-0 9148m 13830Mi ds-idrepo-1 6634m 13793Mi ds-idrepo-2 5474m 13715Mi 
end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2947m 5601Mi idm-65858d8c4c-wd2fd 3147m 5460Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 920m 1070Mi 00:47:42 DEBUG --- stderr --- 00:47:42 DEBUG 00:47:43 INFO 00:47:43 INFO [loop_until]: kubectl --namespace=xlou top node 00:47:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:47:44 INFO [loop_until]: OK (rc = 0) 00:47:44 DEBUG --- stdout --- 00:47:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 103m 0% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3359m 21% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1758m 11% 2340Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3407m 21% 6713Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 50m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9559m 60% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5777m 36% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6977m 43% 14539Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1009m 6% 2549Mi 4% 00:47:44 DEBUG --- stderr --- 00:47:44 DEBUG 00:48:42 INFO 00:48:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:48:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:48:42 INFO [loop_until]: OK (rc = 0) 00:48:42 DEBUG --- stdout --- 00:48:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 26m 5797Mi am-55f77847b7-79tz5 37m 5807Mi am-55f77847b7-c4982 44m 5850Mi ds-cts-0 6m 386Mi ds-cts-1 7m 372Mi ds-cts-2 7m 378Mi ds-idrepo-0 9632m 13832Mi ds-idrepo-1 5653m 13788Mi ds-idrepo-2 6387m 13811Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2694m 5602Mi idm-65858d8c4c-wd2fd 2482m 5460Mi lodemon-755c6d9977-9wwrg 4m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 890m 1074Mi 00:48:42 DEBUG --- stderr --- 00:48:42 DEBUG 00:48:44 INFO 00:48:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:48:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:48:44 INFO [loop_until]: OK (rc = 0) 00:48:44 DEBUG --- stdout --- 00:48:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 103m 0% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3057m 19% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1753m 11% 2324Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2722m 17% 6715Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1141Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9705m 61% 14509Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6317m 39% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5839m 36% 14517Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 970m 6% 2548Mi 4% 00:48:44 DEBUG --- stderr --- 00:48:44 DEBUG 00:49:42 INFO 00:49:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:49:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:49:42 INFO [loop_until]: OK (rc = 0) 00:49:42 DEBUG --- stdout --- 00:49:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 39m 5797Mi am-55f77847b7-79tz5 29m 5807Mi 
am-55f77847b7-c4982 39m 5850Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 7m 377Mi ds-idrepo-0 8584m 13770Mi ds-idrepo-1 4036m 13797Mi ds-idrepo-2 5939m 13709Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2588m 5602Mi idm-65858d8c4c-wd2fd 2640m 5463Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 917m 1084Mi 00:49:42 DEBUG --- stderr --- 00:49:42 DEBUG 00:49:44 INFO 00:49:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:49:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:49:44 INFO [loop_until]: OK (rc = 0) 00:49:44 DEBUG --- stdout --- 00:49:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6836Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6991Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 89m 0% 6988Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2863m 18% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1741m 10% 2425Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3032m 19% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9100m 57% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5593m 35% 14476Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4663m 29% 14534Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 972m 6% 2549Mi 4% 00:49:44 DEBUG --- stderr --- 00:49:44 DEBUG 00:50:42 INFO 00:50:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:50:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:50:42 INFO [loop_until]: OK (rc = 0) 00:50:42 DEBUG --- stdout --- 00:50:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 6m 5797Mi am-55f77847b7-79tz5 8m 5807Mi am-55f77847b7-c4982 10m 5850Mi ds-cts-0 6m 386Mi ds-cts-1 5m 371Mi ds-cts-2 5m 377Mi ds-idrepo-0 539m 13632Mi ds-idrepo-1 1490m 13485Mi ds-idrepo-2 1552m 13434Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 18m 1228Mi idm-65858d8c4c-wd2fd 19m 1181Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 879m 1068Mi 00:50:42 DEBUG --- stderr --- 00:50:42 DEBUG 00:50:44 INFO 00:50:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:50:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:50:44 INFO [loop_until]: OK (rc = 0) 00:50:44 DEBUG --- stdout --- 00:50:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6991Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 162m 1% 2685Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 1561m 9% 2193Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 504m 3% 2478Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 694m 4% 14294Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 391m 2% 14094Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1905m 11% 14171Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 947m 5% 2551Mi 4% 00:50:44 DEBUG --- stderr --- 00:50:44 DEBUG 00:51:42 INFO 00:51:42 INFO [loop_until]: kubectl --namespace=xlou top pods 00:51:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:51:42 INFO [loop_until]: OK (rc = 0) 00:51:42 DEBUG --- stdout 
--- 00:51:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 320m 5797Mi am-55f77847b7-79tz5 367m 5809Mi am-55f77847b7-c4982 333m 5852Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 5m 377Mi ds-idrepo-0 9303m 13823Mi ds-idrepo-1 4538m 13823Mi ds-idrepo-2 5970m 13676Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 3167m 3943Mi idm-65858d8c4c-wd2fd 3105m 3878Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 821m 1069Mi 00:51:42 DEBUG --- stderr --- 00:51:42 DEBUG 00:51:44 INFO 00:51:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:51:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:51:44 INFO [loop_until]: OK (rc = 0) 00:51:44 DEBUG --- stdout --- 00:51:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 385m 2% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 397m 2% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 422m 2% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3315m 20% 5284Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1659m 10% 2194Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3495m 21% 5155Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9580m 60% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5816m 36% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4222m 26% 14509Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 912m 5% 2552Mi 4% 00:51:44 DEBUG --- stderr --- 00:51:44 DEBUG 00:52:43 INFO 00:52:43 INFO [loop_until]: kubectl --namespace=xlou top pods 00:52:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:52:43 INFO [loop_until]: OK (rc = 0) 00:52:43 DEBUG --- stdout --- 00:52:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 170m 5797Mi am-55f77847b7-79tz5 149m 5809Mi am-55f77847b7-c4982 161m 5852Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 6m 377Mi ds-idrepo-0 10006m 13734Mi ds-idrepo-1 3899m 13824Mi ds-idrepo-2 5154m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2713m 3966Mi idm-65858d8c4c-wd2fd 2825m 3906Mi lodemon-755c6d9977-9wwrg 6m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 843m 1072Mi 00:52:43 DEBUG --- stderr --- 00:52:43 DEBUG 00:52:44 INFO 00:52:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:52:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:52:44 INFO [loop_until]: OK (rc = 0) 00:52:44 DEBUG --- stdout --- 00:52:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 220m 1% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 231m 1% 6988Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 205m 1% 6981Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3073m 19% 5315Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1644m 10% 2216Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3144m 19% 5196Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10320m 64% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5274m 33% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4657m 29% 14225Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 907m 5% 2553Mi 4% 00:52:44 DEBUG --- stderr --- 00:52:44 DEBUG 00:53:43 INFO 00:53:43 INFO [loop_until]: 
kubectl --namespace=xlou top pods 00:53:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:53:43 INFO [loop_until]: OK (rc = 0) 00:53:43 DEBUG --- stdout --- 00:53:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 75m 5798Mi am-55f77847b7-79tz5 36m 5809Mi am-55f77847b7-c4982 42m 5852Mi ds-cts-0 5m 386Mi ds-cts-1 7m 371Mi ds-cts-2 6m 378Mi ds-idrepo-0 10866m 13694Mi ds-idrepo-1 5182m 13794Mi ds-idrepo-2 6392m 13823Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2800m 4016Mi idm-65858d8c4c-wd2fd 2938m 3951Mi lodemon-755c6d9977-9wwrg 5m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 833m 1073Mi 00:53:43 DEBUG --- stderr --- 00:53:43 DEBUG 00:53:44 INFO 00:53:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:53:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:53:44 INFO [loop_until]: OK (rc = 0) 00:53:44 DEBUG --- stdout --- 00:53:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 103m 0% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3242m 20% 5359Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1695m 10% 2253Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3200m 20% 5227Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 49m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10897m 68% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7385m 46% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5401m 33% 14466Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 896m 5% 2549Mi 4% 00:53:44 DEBUG --- stderr --- 00:53:44 DEBUG 00:54:43 INFO 00:54:43 INFO [loop_until]: kubectl --namespace=xlou top pods 00:54:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:54:43 INFO [loop_until]: OK (rc = 0) 00:54:43 DEBUG --- stdout --- 00:54:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 75m 5798Mi am-55f77847b7-79tz5 155m 5809Mi am-55f77847b7-c4982 110m 5852Mi ds-cts-0 5m 387Mi ds-cts-1 6m 371Mi ds-cts-2 6m 377Mi ds-idrepo-0 10203m 13691Mi ds-idrepo-1 6515m 13784Mi ds-idrepo-2 5201m 13688Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2751m 4044Mi idm-65858d8c4c-wd2fd 2793m 3981Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 807m 1072Mi 00:54:43 DEBUG --- stderr --- 00:54:43 DEBUG 00:54:44 INFO 00:54:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:54:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:54:44 INFO [loop_until]: OK (rc = 0) 00:54:44 DEBUG --- stdout --- 00:54:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 182m 1% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2985m 18% 5386Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1675m 10% 2200Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3081m 19% 5258Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10491m 66% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5199m 32% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1110Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 6907m 43% 14553Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 891m 5% 2552Mi 4% 00:54:44 DEBUG --- stderr --- 00:54:44 DEBUG 00:55:43 INFO 00:55:43 INFO [loop_until]: kubectl --namespace=xlou top pods 00:55:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:55:43 INFO [loop_until]: OK (rc = 0) 00:55:43 DEBUG --- stdout --- 00:55:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 115m 5798Mi am-55f77847b7-79tz5 97m 5812Mi am-55f77847b7-c4982 113m 5855Mi ds-cts-0 6m 387Mi ds-cts-1 7m 371Mi ds-cts-2 6m 377Mi ds-idrepo-0 10102m 13676Mi ds-idrepo-1 5156m 13653Mi ds-idrepo-2 5280m 13749Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2624m 4072Mi idm-65858d8c4c-wd2fd 3013m 4005Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 812m 1073Mi 00:55:43 DEBUG --- stderr --- 00:55:43 DEBUG 00:55:44 INFO 00:55:44 INFO [loop_until]: kubectl --namespace=xlou top node 00:55:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:55:44 INFO [loop_until]: OK (rc = 0) 00:55:44 DEBUG --- stdout --- 00:55:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 180m 1% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 154m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2947m 18% 5410Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1647m 10% 2237Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2973m 18% 5286Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10058m 63% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5062m 31% 14483Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5139m 32% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 905m 5% 2551Mi 4% 00:55:44 DEBUG --- stderr --- 00:55:44 DEBUG 00:56:43 INFO 00:56:43 INFO [loop_until]: kubectl --namespace=xlou top pods 00:56:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:56:43 INFO [loop_until]: OK (rc = 0) 00:56:43 DEBUG --- stdout --- 00:56:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 177m 5798Mi am-55f77847b7-79tz5 186m 5813Mi am-55f77847b7-c4982 69m 5855Mi ds-cts-0 5m 387Mi ds-cts-1 7m 373Mi ds-cts-2 6m 378Mi ds-idrepo-0 10062m 13757Mi ds-idrepo-1 4534m 13828Mi ds-idrepo-2 4770m 13766Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 2837m 4107Mi idm-65858d8c4c-wd2fd 2879m 4043Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 812m 1073Mi 00:56:43 DEBUG --- stderr --- 00:56:43 DEBUG 00:56:45 INFO 00:56:45 INFO [loop_until]: kubectl --namespace=xlou top node 00:56:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:56:45 INFO [loop_until]: OK (rc = 0) 00:56:45 DEBUG --- stdout --- 00:56:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 245m 1% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 136m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 235m 1% 6981Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3033m 19% 5448Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1658m 10% 2210Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3107m 19% 5322Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 
55m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 10113m 63% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5228m 32% 14496Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4230m 26% 14564Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 870m 5% 2552Mi 4% 00:56:45 DEBUG --- stderr --- 00:56:45 DEBUG 00:57:43 INFO 00:57:43 INFO [loop_until]: kubectl --namespace=xlou top pods 00:57:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:57:43 INFO [loop_until]: OK (rc = 0) 00:57:43 DEBUG --- stdout --- 00:57:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 203m 5798Mi am-55f77847b7-79tz5 42m 5813Mi am-55f77847b7-c4982 97m 5855Mi ds-cts-0 5m 387Mi ds-cts-1 6m 373Mi ds-cts-2 6m 377Mi ds-idrepo-0 6754m 13765Mi ds-idrepo-1 5349m 13754Mi ds-idrepo-2 5423m 13827Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 1932m 4141Mi idm-65858d8c4c-wd2fd 2398m 4124Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 842m 1071Mi 00:57:43 DEBUG --- stderr --- 00:57:43 DEBUG 00:57:45 INFO 00:57:45 INFO [loop_until]: kubectl --namespace=xlou top node 00:57:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:57:45 INFO [loop_until]: OK (rc = 0) 00:57:45 DEBUG --- stdout --- 00:57:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 213m 1% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 160m 1% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2758m 17% 5483Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1678m 10% 2195Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2639m 16% 5404Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 50m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7689m 48% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5901m 37% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5506m 34% 14389Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 916m 5% 2554Mi 4% 00:57:45 DEBUG --- stderr --- 00:57:45 DEBUG 00:58:43 INFO 00:58:43 INFO [loop_until]: kubectl --namespace=xlou top pods 00:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:58:43 INFO [loop_until]: OK (rc = 0) 00:58:43 DEBUG --- stdout --- 00:58:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 10m 5798Mi am-55f77847b7-79tz5 7m 5813Mi am-55f77847b7-c4982 8m 5855Mi ds-cts-0 5m 387Mi ds-cts-1 5m 373Mi ds-cts-2 6m 377Mi ds-idrepo-0 224m 13572Mi ds-idrepo-1 210m 13540Mi ds-idrepo-2 11m 13662Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 515m 4143Mi idm-65858d8c4c-wd2fd 504m 4126Mi lodemon-755c6d9977-9wwrg 6m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 841m 1071Mi 00:58:43 DEBUG --- stderr --- 00:58:43 DEBUG 00:58:45 INFO 00:58:45 INFO [loop_until]: kubectl --namespace=xlou top node 00:58:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:58:45 INFO [loop_until]: OK (rc = 0) 00:58:45 DEBUG --- stdout --- 00:58:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 647m 4% 5488Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1554m 
9% 2193Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 671m 4% 5447Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 272m 1% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14338Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14231Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 911m 5% 2552Mi 4% 00:58:45 DEBUG --- stderr --- 00:58:45 DEBUG 00:59:43 INFO 00:59:43 INFO [loop_until]: kubectl --namespace=xlou top pods 00:59:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:59:43 INFO [loop_until]: OK (rc = 0) 00:59:43 DEBUG --- stdout --- 00:59:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 11m 5798Mi am-55f77847b7-79tz5 9m 5813Mi am-55f77847b7-c4982 8m 5855Mi ds-cts-0 6m 387Mi ds-cts-1 6m 373Mi ds-cts-2 7m 377Mi ds-idrepo-0 52m 13575Mi ds-idrepo-1 9m 13540Mi ds-idrepo-2 10m 13662Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 128m 4146Mi idm-65858d8c4c-wd2fd 71m 4157Mi lodemon-755c6d9977-9wwrg 1m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 252m 1071Mi 00:59:43 DEBUG --- stderr --- 00:59:43 DEBUG 00:59:45 INFO 00:59:45 INFO [loop_until]: kubectl --namespace=xlou top node 00:59:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:59:45 INFO [loop_until]: OK (rc = 0) 00:59:45 DEBUG --- stdout --- 00:59:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 6997Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 146m 0% 5491Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 361m 2% 2191Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 114m 0% 5436Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 75m 0% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14231Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 223m 1% 2553Mi 4% 00:59:45 DEBUG --- stderr --- 00:59:45 DEBUG 01:00:43 INFO 01:00:43 INFO [loop_until]: kubectl --namespace=xlou top pods 01:00:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:00:43 INFO [loop_until]: OK (rc = 0) 01:00:43 DEBUG --- stdout --- 01:00:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 11m 5798Mi am-55f77847b7-79tz5 7m 5812Mi am-55f77847b7-c4982 8m 5855Mi ds-cts-0 9m 392Mi ds-cts-1 10m 377Mi ds-cts-2 8m 379Mi ds-idrepo-0 41m 13565Mi ds-idrepo-1 40m 13535Mi ds-idrepo-2 42m 13658Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7m 4146Mi idm-65858d8c4c-wd2fd 8m 4156Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 1m 263Mi 01:00:43 DEBUG --- stderr --- 01:00:43 DEBUG 01:00:45 INFO 01:00:45 INFO [loop_until]: kubectl --namespace=xlou top node 01:00:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:00:45 INFO [loop_until]: OK (rc = 0) 01:00:45 DEBUG --- stdout --- 01:00:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6982Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 5489Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2187Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 5438Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 94m 0% 14257Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 99m 0% 14338Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 89m 0% 14237Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1752Mi 2% 01:00:45 DEBUG --- stderr --- 01:00:45 DEBUG 127.0.0.1 - - [13/Aug/2023 01:01:09] "GET /monitoring/average?start_time=23-08-12_23:30:33&stop_time=23-08-12_23:59:08 HTTP/1.1" 200 - 01:01:44 INFO 01:01:44 INFO [loop_until]: kubectl --namespace=xlou top pods 01:01:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:01:44 INFO [loop_until]: OK (rc = 0) 01:01:44 DEBUG --- stdout --- 01:01:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 4Mi am-55f77847b7-5xs2m 11m 5798Mi am-55f77847b7-79tz5 7m 5813Mi am-55f77847b7-c4982 9m 5855Mi ds-cts-0 5m 392Mi ds-cts-1 7m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 49m 13566Mi ds-idrepo-1 52m 13527Mi ds-idrepo-2 44m 13658Mi end-user-ui-6845bc78c7-pjfkl 1m 3Mi idm-65858d8c4c-n7zrc 7m 4146Mi idm-65858d8c4c-wd2fd 7m 4156Mi lodemon-755c6d9977-9wwrg 4m 67Mi login-ui-74d6fb46c-njxbz 1m 2Mi overseer-0-c77c496cb-dtn6s 2m 263Mi 01:01:44 DEBUG --- stderr --- 01:01:44 DEBUG 01:01:45 INFO 01:01:45 INFO [loop_until]: kubectl --namespace=xlou top node 01:01:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:01:45 INFO [loop_until]: OK (rc = 0) 01:01:45 DEBUG --- stdout --- 01:01:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6993Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 5487Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2191Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 5437Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 100m 0% 14257Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 94m 0% 14342Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 109m 0% 14235Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 86m 0% 1762Mi 3% 01:01:45 DEBUG --- stderr --- 01:01:45 DEBUG 01:02:44 INFO 01:02:44 INFO [loop_until]: kubectl --namespace=xlou top pods 01:02:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:02:44 INFO [loop_until]: OK (rc = 0) 01:02:44 DEBUG --- stdout --- 01:02:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 5Mi am-55f77847b7-5xs2m 9m 5798Mi am-55f77847b7-79tz5 7m 5812Mi am-55f77847b7-c4982 8m 5878Mi ds-cts-0 5m 392Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 429m 13566Mi ds-idrepo-1 248m 13527Mi ds-idrepo-2 511m 13665Mi end-user-ui-6845bc78c7-pjfkl 1m 5Mi idm-65858d8c4c-n7zrc 7m 4145Mi idm-65858d8c4c-wd2fd 7m 4157Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 1033m 652Mi 01:02:44 DEBUG --- stderr --- 01:02:44 DEBUG 01:02:45 INFO 01:02:45 INFO [loop_until]: kubectl --namespace=xlou top node 01:02:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:02:45 INFO [loop_until]: OK (rc = 0) 01:02:45 DEBUG --- stdout --- 01:02:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 7016Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 5488Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 137m 0% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 5439Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 50m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 467m 2% 14257Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 544m 3% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 268m 1% 14243Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1079m 6% 2166Mi 3% 01:02:45 DEBUG --- stderr --- 01:02:45 DEBUG 01:03:44 INFO 01:03:44 INFO [loop_until]: kubectl --namespace=xlou top pods 01:03:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:03:44 INFO [loop_until]: OK (rc = 0) 01:03:44 DEBUG --- stdout --- 01:03:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 5Mi am-55f77847b7-5xs2m 10m 5798Mi am-55f77847b7-79tz5 7m 5813Mi am-55f77847b7-c4982 8m 5877Mi ds-cts-0 5m 393Mi ds-cts-1 5m 378Mi ds-cts-2 6m 380Mi ds-idrepo-0 52m 13566Mi ds-idrepo-1 48m 13527Mi ds-idrepo-2 44m 13666Mi end-user-ui-6845bc78c7-pjfkl 1m 5Mi idm-65858d8c4c-n7zrc 7m 4146Mi idm-65858d8c4c-wd2fd 7m 4156Mi lodemon-755c6d9977-9wwrg 5m 66Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 700m 807Mi 01:03:44 DEBUG --- stderr --- 01:03:44 DEBUG 01:03:45 INFO 01:03:45 INFO [loop_until]: kubectl --namespace=xlou top node 01:03:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:03:45 INFO [loop_until]: OK (rc = 0) 01:03:45 DEBUG --- stdout --- 01:03:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 7018Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6983Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 82m 0% 5502Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 5438Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 100m 0% 14259Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 98m 0% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 97m 0% 14238Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 728m 4% 2283Mi 3% 01:03:45 DEBUG --- stderr --- 01:03:45 DEBUG 01:04:44 INFO 01:04:44 INFO [loop_until]: kubectl --namespace=xlou top pods 01:04:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:04:44 INFO [loop_until]: OK (rc = 0) 01:04:44 DEBUG --- stdout --- 01:04:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 5Mi am-55f77847b7-5xs2m 9m 5798Mi am-55f77847b7-79tz5 8m 5812Mi am-55f77847b7-c4982 9m 5877Mi ds-cts-0 5m 393Mi ds-cts-1 5m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 53m 13568Mi ds-idrepo-1 44m 13528Mi ds-idrepo-2 44m 13666Mi end-user-ui-6845bc78c7-pjfkl 1m 5Mi idm-65858d8c4c-n7zrc 7m 4146Mi idm-65858d8c4c-wd2fd 7m 4156Mi lodemon-755c6d9977-9wwrg 7m 66Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 1018m 988Mi 01:04:44 DEBUG --- stderr --- 01:04:44 DEBUG 01:04:46 INFO 01:04:46 INFO [loop_until]: kubectl --namespace=xlou top node 01:04:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:04:46 INFO [loop_until]: OK 
(rc = 0) 01:04:46 DEBUG --- stdout --- 01:04:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 7014Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6983Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 5484Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 5434Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 99m 0% 14258Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 94m 0% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 97m 0% 14238Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1080m 6% 2513Mi 4% 01:04:46 DEBUG --- stderr --- 01:04:46 DEBUG 01:05:44 INFO 01:05:44 INFO [loop_until]: kubectl --namespace=xlou top pods 01:05:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:05:44 INFO [loop_until]: OK (rc = 0) 01:05:44 DEBUG --- stdout --- 01:05:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 5Mi am-55f77847b7-5xs2m 10m 5798Mi am-55f77847b7-79tz5 8m 5813Mi am-55f77847b7-c4982 9m 5877Mi ds-cts-0 5m 392Mi ds-cts-1 9m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 48m 13568Mi ds-idrepo-1 44m 13527Mi ds-idrepo-2 44m 13666Mi end-user-ui-6845bc78c7-pjfkl 1m 5Mi idm-65858d8c4c-n7zrc 7m 4146Mi idm-65858d8c4c-wd2fd 7m 4155Mi lodemon-755c6d9977-9wwrg 2m 66Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 942m 1167Mi 01:05:44 DEBUG --- stderr --- 01:05:44 DEBUG 01:05:46 INFO 01:05:46 INFO [loop_until]: kubectl --namespace=xlou top node 01:05:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:05:46 INFO [loop_until]: OK (rc = 0) 01:05:46 DEBUG --- stdout --- 01:05:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 7014Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 5483Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2181Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5439Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 96m 0% 14262Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 94m 0% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 96m 0% 14235Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1228m 7% 2662Mi 4% 01:05:46 DEBUG --- stderr --- 01:05:46 DEBUG 01:06:44 INFO 01:06:44 INFO [loop_until]: kubectl --namespace=xlou top pods 01:06:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:06:44 INFO [loop_until]: OK (rc = 0) 01:06:44 DEBUG --- stdout --- 01:06:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-ctqf8 1m 5Mi am-55f77847b7-5xs2m 9m 5798Mi am-55f77847b7-79tz5 9m 5813Mi am-55f77847b7-c4982 8m 5877Mi ds-cts-0 6m 393Mi ds-cts-1 7m 377Mi ds-cts-2 7m 380Mi ds-idrepo-0 49m 13568Mi ds-idrepo-1 44m 13527Mi ds-idrepo-2 42m 13666Mi end-user-ui-6845bc78c7-pjfkl 1m 5Mi idm-65858d8c4c-n7zrc 7m 4146Mi idm-65858d8c4c-wd2fd 7m 4155Mi lodemon-755c6d9977-9wwrg 5m 66Mi login-ui-74d6fb46c-njxbz 1m 3Mi overseer-0-c77c496cb-dtn6s 626m 1318Mi 01:06:44 DEBUG --- stderr --- 01:06:44 DEBUG 01:06:46 INFO 01:06:46 INFO [loop_until]: kubectl --namespace=xlou top node 
01:06:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:06:46 INFO [loop_until]: OK (rc = 0)
01:06:46 DEBUG --- stdout ---
01:06:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 7017Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 6981Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 80m 0% 5483Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 131m 0% 2184Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 5437Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 97m 0% 14259Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 90m 0% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 90m 0% 14235Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 658m 4% 2800Mi 4%
01:06:46 DEBUG --- stderr ---
01:06:46 DEBUG
01:07:36 INFO Finished: True
01:07:36 INFO Waiting for threads to register finish flag
01:07:46 INFO Done. Have a nice day! :)
127.0.0.1 - - [13/Aug/2023 01:07:46] "GET /monitoring/stop HTTP/1.1" 200 -
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Cpu_cores_used_per_pod.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Memory_usage_per_pod.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Disk_tps_read_per_pod.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Disk_tps_writes_per_pod.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Cpu_cores_used_per_node.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Memory_usage_used_per_node.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Cpu_iowait_per_node.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Network_receive_per_node.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Network_transmit_per_node.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/am_cts_task_count_token_session.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/am_authentication_rate.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/am_authentication_count_per_pod.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/Cts_reaper_Deletion_count.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/AM_oauth2_authorization_codes.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_pods_replication_delay.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/am_cts_reaper_cache_size.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/node_disk_read_bytes_total.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/node_disk_written_bytes_total.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/ds_backend_entry_count.json does not exist. Skipping...
01:07:49 INFO File /tmp/lodemon_data-23-08-12_22:26:48/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [13/Aug/2023 01:07:51] "GET /monitoring/process HTTP/1.1" 200 -
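====================================================================================================
================================ loop_until sketch (illustrative) =================================
====================================================================================================
Each [loop_until] entry in the log above corresponds to one invocation of a retry wrapper that
re-runs a kubectl command until it exits with an expected return code (or a time budget runs out),
then dumps the command's stdout and stderr at DEBUG level. The wrapper's implementation is not part
of this log, so the following is only a minimal sketch of that pattern; the function name
loop_until, its defaults, and its return value are assumptions made for illustration.

#!/usr/bin/env python3
# Illustrative sketch only: a minimal loop_until-style retry wrapper in the spirit of the
# log entries above. Not the actual lodemon helper; names and defaults are assumptions.
import subprocess
import time


def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run `cmd` until its return code is in `expected_rc` or `max_time` seconds elapse.

    Returns (returncode, stdout, stderr) of the last attempt.
    """
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode in expected_rc or time.monotonic() >= deadline:
            return result.returncode, result.stdout, result.stderr
        time.sleep(interval)


if __name__ == "__main__":
    # Same shape of call as the log entries above: poll pod resource usage in the xlou namespace.
    rc, out, err = loop_until(["kubectl", "--namespace=xlou", "top", "pods"],
                              max_time=180, interval=5, expected_rc=(0,))
    print("rc =", rc)
    print(out)

With max_time=180 and interval=5, as logged above, a failing command would be retried roughly every
five seconds for up to three minutes before the last result is returned.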