====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:             lodemon-56989b88bb-nm2fw
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Fri, 11 Aug 2023 15:22:17 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=56989b88bb
                  skaffold.dev/run-id=a22300d0-239e-4e44-8edf-db784dda2a3e
Annotations:      <none>
Status:           Running
IP:               10.106.45.17
IPs:
  IP:             10.106.45.17
Controlled By:    ReplicaSet/lodemon-56989b88bb
Containers:
  lodemon:
    Container ID:  containerd://258c39520ce6d5864f2209c1722f5745e63a3f81d45fbaa391c1a94b6483d683
    Image:         gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:      gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Fri, 11 Aug 2023 15:22:36 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7f64p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-7f64p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
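Both probes above are the same exec check, `cat /tmp/lodemon_alive`, so the container reports live and ready for as long as that file exists and is readable. The real lodemon code is not reproduced in this capture; a minimal, hypothetical Python sketch of how the monitored process might maintain such a marker:

```python
from pathlib import Path

# Path taken from the exec probe definitions in the describe output above.
ALIVE_FILE = Path("/tmp/lodemon_alive")

def touch_alive(path: Path = ALIVE_FILE) -> None:
    """Create or refresh the liveness marker file.

    The kubelet's exec probe (`cat /tmp/lodemon_alive`) exits 0 whenever
    the file is readable, so calling this periodically keeps the pod
    passing both the liveness and readiness checks.
    """
    path.touch(exist_ok=True)
```

A process that stops calling `touch_alive` does not fail the probe by itself (the file persists), so a real implementation would typically also delete the file on fatal errors.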
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
16:22:37 INFO
16:22:37 INFO --------------------- Get expected number of pods ---------------------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas}
16:22:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG 3
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO ---------------------------- Get pod list ----------------------------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name}
16:22:37 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG am-55f77847b7-7qk7g am-55f77847b7-ngpns am-55f77847b7-q6zcv
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO -------------- Check pod am-55f77847b7-7qk7g is running --------------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-7qk7g -o=jsonpath={.status.phase} | grep "Running"
16:22:37 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG Running
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-7qk7g -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:37 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG true
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-7qk7g --output jsonpath={.status.startTime}
16:22:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG 2023-08-11T15:13:05Z
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO ------- Check pod am-55f77847b7-7qk7g filesystem is accessible -------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-7qk7g --container openam -- ls / | grep "bin"
16:22:37 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO ------------- Check pod am-55f77847b7-7qk7g restart count -------------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-7qk7g --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG 0
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO Pod am-55f77847b7-7qk7g has been restarted 0 times.
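Every command in this log is wrapped in a `[loop_until]` entry that records a `max_time`, an `interval`, and an `expected_rc` list. The implementation is not shown in the capture; a minimal Python sketch consistent with those logged parameters, assuming a simple shell-out retry loop:

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run a shell command until its return code is in expected_rc,
    or raise TimeoutError once max_time seconds have elapsed.

    Returns the final subprocess.CompletedProcess on success, so the
    caller can log stdout/stderr the way the log above does.
    """
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command failed after {max_time}s: {cmd}")
        time.sleep(interval)
```

Piping through `grep` (as in the phase and readiness checks above) makes the return code double as a pattern match: `grep` exits non-zero when the expected string is absent, so the loop keeps retrying until the pattern appears.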
16:22:37 INFO
16:22:37 INFO -------------- Check pod am-55f77847b7-ngpns is running --------------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-ngpns -o=jsonpath={.status.phase} | grep "Running"
16:22:37 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG Running
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-ngpns -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:37 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG true
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-ngpns --output jsonpath={.status.startTime}
16:22:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG 2023-08-11T15:13:05Z
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO ------- Check pod am-55f77847b7-ngpns filesystem is accessible -------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-ngpns --container openam -- ls / | grep "bin"
16:22:37 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:37 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:37 INFO [loop_until]: OK (rc = 0)
16:22:37 DEBUG --- stdout ---
16:22:37 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
16:22:37 DEBUG --- stderr ---
16:22:37 DEBUG
16:22:37 INFO
16:22:37 INFO ------------- Check pod am-55f77847b7-ngpns restart count -------------
16:22:37 INFO
16:22:37 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-ngpns --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 0
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO Pod am-55f77847b7-ngpns has been restarted 0 times.
16:22:38 INFO
16:22:38 INFO -------------- Check pod am-55f77847b7-q6zcv is running --------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-q6zcv -o=jsonpath={.status.phase} | grep "Running"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG Running
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-q6zcv -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG true
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-q6zcv --output jsonpath={.status.startTime}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 2023-08-11T15:13:05Z
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO ------- Check pod am-55f77847b7-q6zcv filesystem is accessible -------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-q6zcv --container openam -- ls / | grep "bin"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO ------------- Check pod am-55f77847b7-q6zcv restart count -------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-q6zcv --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 0
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO Pod am-55f77847b7-q6zcv has been restarted 0 times.
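The same four checks (phase, container readiness, filesystem access, restart count) repeat for every am pod above and for the idm and ds pods below. Stripped of the retry wrapper, the per-pod sequence amounts to the following sketch; the helper names are hypothetical and the real script (`/lodestar/scripts/lodemon_run.py`) is not reproduced in this capture:

```python
import subprocess

NAMESPACE = "xlou"  # namespace seen throughout the log above

def kubectl(*args: str) -> str:
    """Run kubectl against the test namespace and return stripped stdout."""
    result = subprocess.run(
        ["kubectl", f"--namespace={NAMESPACE}", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def check_pod(name: str, container: str, run=kubectl) -> str:
    """Mirror the per-pod checks in the log: phase, readiness,
    filesystem access, and restart count.

    `run` is injectable so the sequence can be exercised without a
    live cluster; it returns the restart count as logged.
    """
    assert run("get", "pod", name, "-o", "jsonpath={.status.phase}") == "Running"
    assert "true" in run("get", "pod", name,
                         "-o", "jsonpath={.status.containerStatuses[*].ready}")
    # Listing / inside the container proves the filesystem is reachable.
    assert "bin" in run("exec", name, "--container", container, "--", "ls", "/")
    return run("get", "pod", name,
               "-o", "jsonpath={.status.containerStatuses[*].restartCount}")
```

Note that `containerStatuses[*]` returns one value per container, which is why the log greps for `"true"` rather than comparing for equality.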
16:22:38 INFO
16:22:38 INFO --------------------- Get expected number of pods ---------------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 2
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO ---------------------------- Get pod list ----------------------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name}
16:22:38 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG idm-65858d8c4c-5kwbg idm-65858d8c4c-8ff69
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO -------------- Check pod idm-65858d8c4c-5kwbg is running --------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5kwbg -o=jsonpath={.status.phase} | grep "Running"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG Running
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5kwbg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG true
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5kwbg --output jsonpath={.status.startTime}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 2023-08-11T15:13:05Z
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO ------- Check pod idm-65858d8c4c-5kwbg filesystem is accessible -------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-5kwbg --container openidm -- ls / | grep "bin"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO ------------ Check pod idm-65858d8c4c-5kwbg restart count ------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5kwbg --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 0
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO Pod idm-65858d8c4c-5kwbg has been restarted 0 times.
16:22:38 INFO
16:22:38 INFO -------------- Check pod idm-65858d8c4c-8ff69 is running --------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-8ff69 -o=jsonpath={.status.phase} | grep "Running"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG Running
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-8ff69 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG true
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-8ff69 --output jsonpath={.status.startTime}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 2023-08-11T15:13:05Z
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO ------- Check pod idm-65858d8c4c-8ff69 filesystem is accessible -------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-8ff69 --container openidm -- ls / | grep "bin"
16:22:38 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO
16:22:38 INFO ------------ Check pod idm-65858d8c4c-8ff69 restart count ------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-8ff69 --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:38 INFO [loop_until]: OK (rc = 0)
16:22:38 DEBUG --- stdout ---
16:22:38 DEBUG 0
16:22:38 DEBUG --- stderr ---
16:22:38 DEBUG
16:22:38 INFO Pod idm-65858d8c4c-8ff69 has been restarted 0 times.
16:22:38 INFO
16:22:38 INFO --------------------- Get expected number of pods ---------------------
16:22:38 INFO
16:22:38 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas}
16:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG 3
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ---------------------------- Get pod list ----------------------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name}
16:22:39 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ------------------ Check pod ds-idrepo-0 is running ------------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG Running
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG true
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime}
16:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG 2023-08-11T14:39:18Z
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible -----------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ----------------- Check pod ds-idrepo-0 restart count -----------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG 0
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO Pod ds-idrepo-0 has been restarted 0 times.
16:22:39 INFO
16:22:39 INFO ------------------ Check pod ds-idrepo-1 is running ------------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG Running
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG true
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime}
16:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG 2023-08-11T14:51:08Z
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible -----------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ----------------- Check pod ds-idrepo-1 restart count -----------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG 0
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO Pod ds-idrepo-1 has been restarted 0 times.
16:22:39 INFO
16:22:39 INFO ------------------ Check pod ds-idrepo-2 is running ------------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG Running
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG true
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime}
16:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG 2023-08-11T15:02:13Z
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible -----------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin"
16:22:39 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO
16:22:39 INFO ----------------- Check pod ds-idrepo-2 restart count -----------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:39 INFO [loop_until]: OK (rc = 0)
16:22:39 DEBUG --- stdout ---
16:22:39 DEBUG 0
16:22:39 DEBUG --- stderr ---
16:22:39 DEBUG
16:22:39 INFO Pod ds-idrepo-2 has been restarted 0 times.
16:22:39 INFO
16:22:39 INFO --------------------- Get expected number of pods ---------------------
16:22:39 INFO
16:22:39 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas}
16:22:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG 3
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO ---------------------------- Get pod list ----------------------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name}
16:22:40 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG ds-cts-0 ds-cts-1 ds-cts-2
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO -------------------- Check pod ds-cts-0 is running --------------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG Running
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG true
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime}
16:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG 2023-08-11T14:39:18Z
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO ------------- Check pod ds-cts-0 filesystem is accessible -------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO ------------------ Check pod ds-cts-0 restart count ------------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG 0
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO Pod ds-cts-0 has been restarted 0 times.
16:22:40 INFO
16:22:40 INFO -------------------- Check pod ds-cts-1 is running --------------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG Running
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG true
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime}
16:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG 2023-08-11T14:39:40Z
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO ------------- Check pod ds-cts-1 filesystem is accessible -------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO ------------------ Check pod ds-cts-1 restart count ------------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG 0
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO Pod ds-cts-1 has been restarted 0 times.
16:22:40 INFO
16:22:40 INFO -------------------- Check pod ds-cts-2 is running --------------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG Running
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG true
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime}
16:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG 2023-08-11T14:40:04Z
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO ------------- Check pod ds-cts-2 filesystem is accessible -------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin"
16:22:40 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO
16:22:40 INFO ------------------ Check pod ds-cts-2 restart count ------------------
16:22:40 INFO
16:22:40 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount}
16:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
16:22:40 INFO [loop_until]: OK (rc = 0)
16:22:40 DEBUG --- stdout ---
16:22:40 DEBUG 0
16:22:40 DEBUG --- stderr ---
16:22:40 DEBUG
16:22:40 INFO Pod ds-cts-2 has been restarted 0 times.
 * Serving Flask app 'lodemon_run'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
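Every check above is driven by the same `[loop_until]` poll-and-retry pattern: run a shell command, accept it when the return code (and, for the grep-style checks, an expected pattern) matches, otherwise retry every `interval` seconds until `max_time` expires. A minimal sketch of such a helper — the name `loop_until` and its signature are taken from the log output, but the implementation below is an illustrative assumption, not lodemon's actual code:

```python
import subprocess
import time


def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), pattern=None):
    """Run `cmd` in a shell repeatedly until its return code is in
    `expected_rc` (and, if given, its stdout contains `pattern`),
    or raise TimeoutError once `max_time` seconds have elapsed."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        ok = result.returncode in expected_rc
        if ok and pattern is not None:
            ok = pattern in result.stdout  # expected pattern found
        if ok:
            return result  # caller inspects stdout/stderr, as in the DEBUG lines
        if time.monotonic() >= deadline:
            raise TimeoutError(f"no success within {max_time}s: {cmd}")
        time.sleep(interval)
```

A check such as "pod ds-cts-1 is running" would then be a single call, e.g. `loop_until('kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"', max_time=360, interval=5)`.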
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.17:8080 Press CTRL+C to quit 16:23:04 INFO 16:23:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:04 INFO [loop_until]: OK (rc = 0) 16:23:04 DEBUG --- stdout --- 16:23:04 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["
IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:04 DEBUG --- stderr --- 16:23:04 DEBUG 16:23:04 INFO 16:23:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:04 INFO [loop_until]: OK (rc = 0) 16:23:04 DEBUG --- stdout --- 16:23:04 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","n
amespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:04 DEBUG --- stderr --- 16:23:04 DEBUG 16:23:04 INFO 16:23:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:04 INFO [loop_until]: OK (rc = 0) 16:23:04 DEBUG --- stdout --- 16:23:04 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},
"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:04 DEBUG --- stderr --- 16:23:04 DEBUG 16:23:04 INFO 16:23:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services 
-o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-oper
ator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","
ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-
prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{
},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services 
-o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-oper
ator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:05 INFO [loop_until]: OK (rc = 0) 16:23:05 DEBUG --- stdout --- 16:23:05 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","
ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:05 DEBUG --- stderr --- 16:23:05 DEBUG 16:23:05 INFO 16:23:05 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:06 INFO [loop_until]: OK (rc = 0) 16:23:06 DEBUG --- stdout --- 16:23:06 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-
prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:06 DEBUG --- stderr --- 16:23:06 DEBUG 16:23:06 INFO 16:23:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:06 INFO [loop_until]: OK (rc = 0) 16:23:06 DEBUG --- stdout --- 16:23:06 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{
},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:06 DEBUG --- stderr --- 16:23:06 DEBUG 16:23:06 INFO 16:23:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 16:23:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:06 INFO [loop_until]: OK (rc = 0) 16:23:06 DEBUG --- stdout --- 16:23:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 16:23:06 DEBUG --- stderr --- 16:23:06 DEBUG 16:23:08 INFO Initializing monitoring instance threads 16:23:08 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 16:23:08 INFO Starting instance threads 16:23:08 INFO 16:23:08 INFO Thread started 16:23:08 INFO [loop_until]: kubectl --namespace=xlou top node 16:23:08 INFO 16:23:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:08 INFO Thread started 16:23:08 INFO [loop_until]: kubectl --namespace=xlou top pods 16:23:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread 
started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started 16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388" 16:23:08 INFO Thread started Exception in thread 
Thread-23:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
Exception in thread Thread-24:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
Exception in thread Thread-25:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
Exception in thread Thread-28:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
16:23:08 INFO Thread started
16:23:08 INFO Thread started
16:23:08 INFO Thread started
16:23:08 INFO Thread started
16:23:08 INFO Thread started
16:23:08 INFO Thread started
16:23:08 INFO Thread started
16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691767388"
16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691767388"
16:23:08 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388"
16:23:08 INFO All threads has been started
127.0.0.1 - - [11/Aug/2023 16:23:08] "GET /monitoring/start HTTP/1.1" 200 -
16:23:08 INFO [loop_until]: OK (rc = 0)
16:23:08 DEBUG --- stdout ---
16:23:08 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
16:23:08 DEBUG admin-ui-587fc66dd5-4p7xb      1m           4Mi
16:23:08 DEBUG am-55f77847b7-7qk7g            8m           4302Mi
16:23:08 DEBUG am-55f77847b7-ngpns            8m           4323Mi
16:23:08 DEBUG am-55f77847b7-q6zcv            14m          4426Mi
16:23:08 DEBUG ds-cts-0                       8m           382Mi
16:23:08 DEBUG ds-cts-1                       10m          357Mi
16:23:08 DEBUG ds-cts-2                       11m          357Mi
16:23:08 DEBUG ds-idrepo-0                    24m          10320Mi
16:23:08 DEBUG ds-idrepo-1                    18m          10298Mi
16:23:08 DEBUG ds-idrepo-2                    26m          10324Mi
16:23:08 DEBUG end-user-ui-6845bc78c7-m5k2c   1m           4Mi
16:23:08 DEBUG idm-65858d8c4c-5kwbg           9m           1252Mi
16:23:08 DEBUG idm-65858d8c4c-8ff69           9m           3447Mi
16:23:08 DEBUG lodemon-56989b88bb-nm2fw       721m         61Mi
16:23:08 DEBUG login-ui-74d6fb46c-2qx2r       1m           3Mi
16:23:08 DEBUG overseer-0-5fcfb8f45c-v6ck5    1m           15Mi
16:23:08 DEBUG --- stderr ---
16:23:08 DEBUG
16:23:08 INFO [loop_until]: OK (rc = 0)
16:23:08 DEBUG --- stdout ---
16:23:08 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
16:23:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-2nsn   1451m        9%     1250Mi          2%
16:23:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-5pbc   64m          0%     5259Mi          8%
16:23:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-976h   72m          0%     5494Mi          9%
16:23:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-9p4b   64m          0%     5375Mi          9%
16:23:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-bf2g   79m          0%     4733Mi          8%
16:23:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-h81k   125m         0%     2094Mi          3%
16:23:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-tnc9   73m          0%     2215Mi          3%
16:23:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-1l6p             67m          0%     1044Mi          1%
16:23:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-4z9d             65m          0%     1040Mi          1%
16:23:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1082Mi          1%
16:23:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-b374             78m          0%     10939Mi         18%
16:23:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-n920             78m          0%     10960Mi         18%
16:23:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-x4wx             75m          0%     10915Mi         18%
16:23:08 DEBUG gke-xlou-cdm-frontend-a8771548-k40m       70m          0%     1539Mi          2%
16:23:08 DEBUG --- stderr ---
16:23:08 DEBUG
16:23:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:23:09 WARNING Response is NONE
16:23:09 DEBUG Exception is preset. Setting retry_loop to true
16:23:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
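The monitoring threads above all die on the same line, `monitoring.py` line 285, when the instance's config carries no `functions` entry. A minimal, hypothetical sketch of the failure and a defensive guard; `run_queries` and its return strings are illustrative stand-ins, not the project's real API:

```python
# Sketch of the crash at monitoring.py line 285 and a guarded variant.
# The broken form from the traceback was:
#     if self.prom_data['functions']:
# which raises KeyError and kills the whole monitoring thread whenever
# the 'functions' key is absent from the config dict.

def run_queries(prom_data: dict) -> str:
    # dict.get() returns None for a missing key instead of raising,
    # so an absent section is treated as "nothing to run".
    if prom_data.get('functions'):
        return "functions executed"
    return "no functions configured"
```

With this guard, an instance whose config omits `functions` simply skips that step instead of taking down its thread with an unhandled `KeyError`.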
16:23:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:11 WARNING Response is NONE 16:23:11 WARNING Response is NONE 16:23:11 DEBUG Exception is preset. Setting retry_loop to true 16:23:11 DEBUG Exception is preset. Setting retry_loop to true 16:23:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 16:23:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:15 WARNING Response is NONE 16:23:15 WARNING Response is NONE 16:23:15 DEBUG Exception is preset. Setting retry_loop to true 16:23:15 WARNING Response is NONE 16:23:15 DEBUG Exception is preset. Setting retry_loop to true 16:23:15 WARNING Response is NONE 16:23:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:15 DEBUG Exception is preset. Setting retry_loop to true 16:23:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
16:23:15 DEBUG Exception is preset. Setting retry_loop to true 16:23:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:20 WARNING Response is NONE 16:23:20 DEBUG Exception is preset. Setting retry_loop to true 16:23:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:22 WARNING Response is NONE 16:23:22 WARNING Response is NONE 16:23:22 DEBUG Exception is preset. Setting retry_loop to true 16:23:22 DEBUG Exception is preset. 
Setting retry_loop to true 16:23:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:23 WARNING Response is NONE 16:23:23 WARNING Response is NONE 16:23:23 DEBUG Exception is preset. Setting retry_loop to true 16:23:23 DEBUG Exception is preset. Setting retry_loop to true 16:23:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
16:23:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:26 WARNING Response is NONE 16:23:26 DEBUG Exception is preset. Setting retry_loop to true 16:23:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:28 WARNING Response is NONE 16:23:28 WARNING Response is NONE 16:23:28 DEBUG Exception is preset. Setting retry_loop to true 16:23:28 DEBUG Exception is preset. Setting retry_loop to true 16:23:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:28 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 16:23:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:31 WARNING Response is NONE 16:23:31 DEBUG Exception is preset. Setting retry_loop to true 16:23:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:33 WARNING Response is NONE 16:23:33 DEBUG Exception is preset. Setting retry_loop to true 16:23:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:34 WARNING Response is NONE 16:23:34 DEBUG Exception is preset. Setting retry_loop to true 16:23:34 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 16:23:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:35 WARNING Response is NONE 16:23:35 DEBUG Exception is preset. Setting retry_loop to true 16:23:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:37 WARNING Response is NONE 16:23:37 DEBUG Exception is preset. Setting retry_loop to true 16:23:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:39 WARNING Response is NONE 16:23:39 DEBUG Exception is preset. 
Setting retry_loop to true 16:23:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:40 WARNING Response is NONE 16:23:40 DEBUG Exception is preset. Setting retry_loop to true 16:23:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:42 WARNING Response is NONE 16:23:42 DEBUG Exception is preset. Setting retry_loop to true 16:23:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:44 WARNING Response is NONE 16:23:44 DEBUG Exception is preset. 
Setting retry_loop to true 16:23:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:46 WARNING Response is NONE 16:23:46 DEBUG Exception is preset. Setting retry_loop to true 16:23:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:48 WARNING Response is NONE 16:23:48 DEBUG Exception is preset. Setting retry_loop to true 16:23:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 16:23:50 WARNING Response is NONE 16:23:50 DEBUG Exception is preset. Setting retry_loop to true 16:23:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:51 WARNING Response is NONE 16:23:51 DEBUG Exception is preset. Setting retry_loop to true 16:23:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:53 WARNING Response is NONE 16:23:53 DEBUG Exception is preset. Setting retry_loop to true 16:23:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 16:23:55 WARNING Response is NONE 16:23:55 DEBUG Exception is preset. Setting retry_loop to true 16:23:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:57 WARNING Response is NONE 16:23:57 DEBUG Exception is preset. Setting retry_loop to true 16:23:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:23:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:23:59 WARNING Response is NONE 16:23:59 DEBUG Exception is preset. Setting retry_loop to true 16:23:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
16:24:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:24:01 WARNING Response is NONE 16:24:01 DEBUG Exception is preset. Setting retry_loop to true 16:24:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:24:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:24:02 WARNING Response is NONE 16:24:02 DEBUG Exception is preset. Setting retry_loop to true 16:24:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:24:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:24:04 WARNING Response is NONE 16:24:04 DEBUG Exception is preset. Setting retry_loop to true 16:24:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
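The warning cadence above (sleep 10 s on a connection error, retry, then "Hit retry pattern for a 5 time. Proceeding to check response anyway.") suggests a bounded retry loop. This is a hedged reconstruction under assumed internals; `get_with_retries` and `fetch` are illustrative names, not the real `HttpCmd` code:

```python
import time

# Assumed shape of the retry behaviour visible in the log: each failed
# attempt sleeps and retries; after the final attempt the code stops
# retrying and inspects the (still-None) response, which is what later
# surfaces as FailException('Failed to obtain response from server...').
def get_with_retries(fetch, retries=5, delay=10):
    response = None
    for attempt in range(1, retries + 1):
        try:
            response = fetch()
            break
        except ConnectionError:
            if attempt == retries:
                # "Hit retry pattern ... Proceeding to check response anyway."
                break
            time.sleep(delay)
    if response is None:
        raise RuntimeError('Failed to obtain response from server...')
    return response
```

Under this reading, the retries never succeed because Prometheus is refusing connections outright, so every thread eventually falls through to the failure branch.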
Exception in thread Thread-8:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:06 WARNING Response is NONE
16:24:06 DEBUG Exception is preset. Setting retry_loop to true
16:24:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:08 WARNING Response is NONE
16:24:08 DEBUG Exception is preset. Setting retry_loop to true
16:24:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
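The `TypeError` that kills each monitoring thread above is a distinct bug from the Prometheus outage: `monitoring.py` line 315 calls the logger instance itself (`self.logger(...)`) rather than one of its level methods. A minimal sketch of that failure mode, with `LodestarLogger` internals assumed (only the failure mode and class name come from the traceback):

```python
# Minimal reproduction of the TypeError in the tracebacks above.
# LodestarLogger's real API is assumed; only the class name and the
# "object is not callable" failure come from the log.

class LodestarLogger:
    """Stand-in logger: has level methods but defines no __call__."""
    def __init__(self):
        self.messages = []

    def error(self, msg):
        self.messages.append(msg)


logger = LodestarLogger()

# monitoring.py line 315 does `self.logger(f'...')`, i.e. it calls the
# instance directly. With no __call__ defined, Python raises TypeError.
try:
    logger("Query failed")
except TypeError as exc:
    print(exc)  # 'LodestarLogger' object is not callable

# The likely fix is to invoke a level method instead:
logger.error("Query: up failed with: connection refused")
print(logger.messages)
```

The error-handling path masks the original `FailException`, so the real fix in `monitoring.py` would be either calling a level method (as sketched) or adding a `__call__` that delegates to one.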
16:24:08 INFO
16:24:08 INFO [loop_until]: kubectl --namespace=xlou top pods
16:24:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:24:08 INFO
16:24:08 INFO [loop_until]: kubectl --namespace=xlou top node
16:24:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:24:08 INFO [loop_until]: OK (rc = 0)
16:24:08 DEBUG --- stdout ---
16:24:08 DEBUG
NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m           4Mi
am-55f77847b7-7qk7g            7m           4302Mi
am-55f77847b7-ngpns            7m           4324Mi
am-55f77847b7-q6zcv            12m          4426Mi
ds-cts-0                       9m           393Mi
ds-cts-1                       9m           365Mi
ds-cts-2                       159m         357Mi
ds-idrepo-0                    1289m        10325Mi
ds-idrepo-1                    37m          10301Mi
ds-idrepo-2                    191m         10330Mi
end-user-ui-6845bc78c7-m5k2c   1m           3Mi
idm-65858d8c4c-5kwbg           8m           1290Mi
idm-65858d8c4c-8ff69           11m          3448Mi
lodemon-56989b88bb-nm2fw       3m           66Mi
login-ui-74d6fb46c-2qx2r       1m           3Mi
overseer-0-5fcfb8f45c-v6ck5    178m         48Mi
16:24:08 DEBUG --- stderr ---
16:24:08 DEBUG
16:24:08 INFO [loop_until]: OK (rc = 0)
16:24:08 DEBUG --- stdout ---
16:24:08 DEBUG
NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   78m          0%     1262Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   59m          0%     5261Mi          8%
gke-xlou-cdm-default-pool-f05840a3-976h   71m          0%     5498Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   62m          0%     5374Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   78m          0%     4736Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   123m         0%     2091Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   71m          0%     2254Mi          3%
gke-xlou-cdm-ds-32e4dcb1-1l6p             87m          0%     1042Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             134m         0%     1046Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             124m         0%     1092Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             784m         4%     10945Mi         18%
gke-xlou-cdm-ds-32e4dcb1-n920             164m         1%     10978Mi         18%
gke-xlou-cdm-ds-32e4dcb1-x4wx             178m         1%     10915Mi         18%
gke-xlou-cdm-frontend-a8771548-k40m       263m         1%     1540Mi          2%
16:24:08 DEBUG --- stderr ---
16:24:08 DEBUG
16:24:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:10 WARNING Response is NONE
16:24:10 DEBUG Exception is preset. Setting retry_loop to true
16:24:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:12 WARNING Response is NONE
16:24:12 DEBUG Exception is preset. Setting retry_loop to true
16:24:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:13 WARNING Response is NONE
16:24:13 DEBUG Exception is preset. Setting retry_loop to true
16:24:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:24:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:16 WARNING Response is NONE
16:24:16 DEBUG Exception is preset. Setting retry_loop to true
16:24:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
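The `[loop_until]` lines throughout this log show a polling helper: rerun a shell command until its return code is in `expected_rc` or `max_time` elapses, sleeping `interval` seconds between attempts. A hedged sketch of that behaviour (the real helper lives in the lodestar codebase; this signature and the `TimeoutError` on expiry are assumptions):

```python
# Hedged sketch of the loop_until behaviour visible in the log:
# rerun `cmd` until its return code is in expected_rc or max_time elapses.
import subprocess
import time


def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result  # corresponds to "OK (rc = 0)" in the log
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{cmd!r} did not succeed within {max_time}s")
        time.sleep(interval)


# Mirroring the log's usage, e.g.:
#   loop_until('kubectl --namespace=xlou top pods', max_time=180, interval=5)
print(loop_until("true", max_time=5, interval=1).returncode)  # 0
```

Note the pattern the log relies on: `kubectl top` keeps succeeding on the first attempt even while every Prometheus query fails, which localises the outage to the Prometheus service rather than the cluster API.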
16:24:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:19 WARNING Response is NONE
16:24:19 DEBUG Exception is preset. Setting retry_loop to true
16:24:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:24:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:24 WARNING Response is NONE
16:24:24 DEBUG Exception is preset. Setting retry_loop to true
16:24:24 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:27 WARNING Response is NONE
16:24:27 DEBUG Exception is preset. Setting retry_loop to true
16:24:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:24:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:28 WARNING Response is NONE
16:24:28 DEBUG Exception is preset. Setting retry_loop to true
16:24:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:24:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:30 WARNING Response is NONE
16:24:30 DEBUG Exception is preset. Setting retry_loop to true
16:24:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:24:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:38 WARNING Response is NONE
16:24:38 DEBUG Exception is preset. Setting retry_loop to true
16:24:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:24:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:39 WARNING Response is NONE
16:24:39 DEBUG Exception is preset. Setting retry_loop to true
16:24:39 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-7:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:41 WARNING Response is NONE
16:24:41 DEBUG Exception is preset. Setting retry_loop to true
16:24:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-6:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:42 WARNING Response is NONE
16:24:42 DEBUG Exception is preset. Setting retry_loop to true
16:24:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:24:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:49 WARNING Response is NONE
16:24:49 DEBUG Exception is preset. Setting retry_loop to true
Exception in thread Thread-17:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
16:24:49 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:24:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:24:53 WARNING Response is NONE
16:24:53 DEBUG Exception is preset. Setting retry_loop to true
16:24:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:04 WARNING Response is NONE
16:25:04 DEBUG Exception is preset. Setting retry_loop to true
16:25:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:08 INFO 16:25:08 INFO [loop_until]: kubectl --namespace=xlou top pods 16:25:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:25:08 INFO [loop_until]: OK (rc = 0) 16:25:08 DEBUG --- stdout --- 16:25:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4302Mi am-55f77847b7-ngpns 5m 4324Mi am-55f77847b7-q6zcv 16m 4427Mi ds-cts-0 13m 393Mi ds-cts-1 9m 366Mi ds-cts-2 11m 358Mi ds-idrepo-0 19m 10326Mi ds-idrepo-1 19m 10302Mi ds-idrepo-2 19m 10331Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 7m 1290Mi idm-65858d8c4c-8ff69 10m 3448Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1m 48Mi 16:25:08 DEBUG --- stderr --- 16:25:08 DEBUG 16:25:08 INFO 16:25:08 INFO [loop_until]: kubectl --namespace=xlou top node 16:25:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:25:08 INFO [loop_until]: OK (rc = 0) 16:25:08 DEBUG --- stdout --- 16:25:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5262Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 5374Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4739Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2092Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 2254Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1042Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1051Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 10948Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 70m 0% 10967Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10918Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1552Mi 2% 16:25:08 DEBUG --- stderr --- 16:25:08 DEBUG 16:25:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with 
url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:25:15 WARNING Response is NONE 16:25:15 DEBUG Exception is preset. Setting retry_loop to true 16:25:15 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one
16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
16:25:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
16:25:17 WARNING Response is NONE    [repeated 14 times]
16:25:17 DEBUG Exception is preset. Setting retry_loop to true    [repeated 14 times]
16:25:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...    [repeated 14 times]
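The URL-encoded `query=` strings in the warnings above are ordinary PromQL expressions sent to the Prometheus instant-query endpoint `/api/v1/query`. A minimal sketch of how such a URL is built and how to decode one back into readable PromQL; `build_query_url` is a hypothetical helper, not part of the tool shown in this log:

```python
from urllib.parse import quote, unquote

# Hypothetical helper: build a Prometheus instant-query URL shaped like the ones in the log.
def build_query_url(host: str, promql: str, ts: int) -> str:
    # quote(..., safe='') percent-encodes everything, including (, ), {, }, =, and '
    return f"http://{host}:9090/api/v1/query?query={quote(promql, safe='')}&time={ts}"

url = build_query_url(
    "prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local",
    "sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)",
    1691767388,
)

# Decoding a query string copied from the log recovers the original PromQL.
encoded = "sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29"
print(unquote(encoded))  # sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)
```

Decoding the failing URLs this way makes it easier to see which metric each stuck query was scraping.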
16:25:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:28 WARNING Response is NONE
16:25:28 DEBUG Exception is preset. Setting retry_loop to true
16:25:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:30 WARNING Response is NONE    [repeated 2 times]
16:25:30 DEBUG Exception is preset. Setting retry_loop to true    [repeated 2 times]
16:25:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...    [repeated 2 times]
16:25:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:34 WARNING Response is NONE    [repeated 5 times]
16:25:34 DEBUG Exception is preset. Setting retry_loop to true    [repeated 5 times]
16:25:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...    [repeated 5 times]
16:25:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:39 WARNING Response is NONE
16:25:39 DEBUG Exception is preset. Setting retry_loop to true
16:25:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:41 WARNING Response is NONE    [repeated 2 times]
16:25:41 DEBUG Exception is preset. Setting retry_loop to true    [repeated 2 times]
16:25:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...    [repeated 2 times]
16:25:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:42 WARNING Response is NONE
16:25:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:42 WARNING Response is NONE
16:25:42 DEBUG Exception is preset. Setting retry_loop to true
16:25:42 WARNING Response is NONE
16:25:42 WARNING Response is NONE
16:25:42 DEBUG Exception is preset. Setting retry_loop to true
16:25:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:42 DEBUG Exception is preset. Setting retry_loop to true
16:25:42 DEBUG Exception is preset. Setting retry_loop to true
16:25:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:45 WARNING Response is NONE
16:25:45 DEBUG Exception is preset. Setting retry_loop to true
16:25:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:47 WARNING Response is NONE    [repeated 2 times]
16:25:47 DEBUG Exception is preset. Setting retry_loop to true    [repeated 2 times]
16:25:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...    [repeated 2 times]
16:25:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:50 WARNING Response is NONE
16:25:50 DEBUG Exception is preset. Setting retry_loop to true
16:25:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:52 WARNING Response is NONE
16:25:52 DEBUG Exception is preset. Setting retry_loop to true
16:25:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:53 WARNING Response is NONE    [repeated 2 times]
16:25:53 DEBUG Exception is preset. Setting retry_loop to true    [repeated 2 times]
16:25:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...    [repeated 2 times]
16:25:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:54 WARNING Response is NONE
16:25:54 DEBUG Exception is preset. Setting retry_loop to true
16:25:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:56 WARNING Response is NONE
16:25:56 DEBUG Exception is preset. Setting retry_loop to true
16:25:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:58 WARNING Response is NONE
16:25:58 DEBUG Exception is preset. Setting retry_loop to true
16:25:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:25:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:25:59 WARNING Response is NONE
16:25:59 DEBUG Exception is preset. Setting retry_loop to true
16:25:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:01 WARNING Response is NONE
16:26:01 DEBUG Exception is preset. Setting retry_loop to true
16:26:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:03 WARNING Response is NONE
16:26:03 DEBUG Exception is preset. Setting retry_loop to true
16:26:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:05 WARNING Response is NONE
16:26:05 DEBUG Exception is preset. Setting retry_loop to true
16:26:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:07 WARNING Response is NONE
16:26:07 DEBUG Exception is preset. Setting retry_loop to true
16:26:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:08 WARNING Response is NONE
16:26:08 DEBUG Exception is preset. Setting retry_loop to true
16:26:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:08 INFO
16:26:08 INFO [loop_until]: kubectl --namespace=xlou top pods
16:26:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:26:08 INFO [loop_until]: OK (rc = 0)
16:26:08 DEBUG --- stdout ---
16:26:08 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
16:26:08 DEBUG admin-ui-587fc66dd5-4p7xb      1m           4Mi
16:26:08 DEBUG am-55f77847b7-7qk7g            9m           4302Mi
16:26:08 DEBUG am-55f77847b7-ngpns            9m           4324Mi
16:26:08 DEBUG am-55f77847b7-q6zcv            14m          4427Mi
16:26:08 DEBUG ds-cts-0                       7m           394Mi
16:26:08 DEBUG ds-cts-1                       12m          366Mi
16:26:08 DEBUG ds-cts-2                       12m          359Mi
16:26:08 DEBUG ds-idrepo-0                    15m          10326Mi
16:26:08 DEBUG ds-idrepo-1                    33m          10295Mi
16:26:08 DEBUG ds-idrepo-2                    29m          10334Mi
16:26:08 DEBUG end-user-ui-6845bc78c7-m5k2c   1m           3Mi
16:26:08 DEBUG idm-65858d8c4c-5kwbg           6m           1290Mi
16:26:08 DEBUG idm-65858d8c4c-8ff69           7m           3448Mi
16:26:08 DEBUG lodemon-56989b88bb-nm2fw       4m           66Mi
16:26:08 DEBUG login-ui-74d6fb46c-2qx2r       1m           3Mi
16:26:08 DEBUG overseer-0-5fcfb8f45c-v6ck5    16m          84Mi
16:26:08 DEBUG --- stderr ---
16:26:08 DEBUG
16:26:08 INFO
16:26:08 INFO [loop_until]: kubectl --namespace=xlou top node
16:26:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:26:08 INFO [loop_until]: OK (rc = 0)
16:26:08 DEBUG --- stdout ---
16:26:08 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
16:26:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1255Mi          2%
16:26:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-5pbc   62m          0%     5259Mi          8%
16:26:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-976h   67m          0%     5497Mi          9%
16:26:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-9p4b   61m          0%     5374Mi          9%
16:26:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-bf2g   71m          0%     4737Mi          8%
16:26:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-h81k   133m         0%     2090Mi          3%
16:26:08 DEBUG gke-xlou-cdm-default-pool-f05840a3-tnc9   68m          0%     2253Mi          3%
16:26:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-1l6p             67m          0%     1045Mi          1%
16:26:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-4z9d             63m          0%     1046Mi          1%
16:26:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1090Mi          1%
16:26:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-b374             64m          0%     10950Mi         18%
16:26:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-n920             83m          0%     10969Mi         18%
16:26:08 DEBUG gke-xlou-cdm-ds-32e4dcb1-x4wx             86m          0%     10913Mi         18%
16:26:08 DEBUG gke-xlou-cdm-frontend-a8771548-k40m       354m         2%     1649Mi          2%
16:26:08 DEBUG --- stderr ---
16:26:08 DEBUG
16:26:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:09 WARNING Response is NONE
16:26:09 DEBUG Exception is preset. Setting retry_loop to true
16:26:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:10 WARNING Response is NONE
16:26:10 DEBUG Exception is preset. Setting retry_loop to true
16:26:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:12 WARNING Response is NONE
16:26:12 DEBUG Exception is preset. Setting retry_loop to true
16:26:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
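The "known exception … sleeping for 10 secs before retry" and "Hit retry pattern for a 5 time. Proceeding to check response anyway." messages trace out a bounded retry loop around each Prometheus query. A minimal sketch of that behavior; the function name, the injectable `sleep`, and the use of `ConnectionError` as the "known" exception are illustrative assumptions, not the actual lodemon code:

```python
import time

def query_with_retries(fetch, max_retries: int = 5, sleep_secs: int = 10, sleep=time.sleep):
    """Retry a callable on known transient errors. After max_retries attempts,
    fall through and let the caller inspect the (possibly None) response."""
    response = None
    for attempt in range(1, max_retries + 1):
        try:
            response = fetch()
            break  # success: stop retrying
        except ConnectionError:
            # "known" transient exception, e.g. [Errno 111] Connection refused
            if attempt == max_retries:
                # corresponds to "Hit retry pattern... Proceeding to check response anyway."
                break
            sleep(sleep_secs)  # "sleeping for 10 secs before retry..."
    return response
```

The key design point visible in the log: exhausting the retries does not raise here; it returns whatever response exists (possibly `None`), and the failure surfaces later when the caller uses that response.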
Exception in thread Thread-26:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

16:26:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:14 WARNING Response is NONE
16:26:14 DEBUG Exception is preset. Setting retry_loop to true
16:26:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-10:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

16:26:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:16 WARNING Response is NONE
16:26:16 DEBUG Exception is preset. Setting retry_loop to true
16:26:16 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-27:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

16:26:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:19 WARNING Response is NONE
16:26:19 DEBUG Exception is preset. Setting retry_loop to true
16:26:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:20 WARNING Response is NONE
16:26:20 DEBUG Exception is preset. Setting retry_loop to true
16:26:20 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-16: [same FailException → TypeError: 'LodestarLogger' object is not callable traceback as Thread-10 above]
16:26:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:21 WARNING Response is NONE
16:26:21 DEBUG Exception is preset. Setting retry_loop to true
16:26:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:24 WARNING Response is NONE
16:26:24 DEBUG Exception is preset. Setting retry_loop to true
16:26:24 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-19: [same FailException → TypeError: 'LodestarLogger' object is not callable traceback as Thread-10 above]
16:26:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:24 WARNING Response is NONE
16:26:24 DEBUG Exception is preset. Setting retry_loop to true
16:26:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:29 WARNING Response is NONE
16:26:29 DEBUG Exception is preset. Setting retry_loop to true
16:26:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:30 WARNING Response is NONE
16:26:30 DEBUG Exception is preset. Setting retry_loop to true
16:26:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:31 WARNING Response is NONE
16:26:31 DEBUG Exception is preset. Setting retry_loop to true
16:26:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:31 WARNING Response is NONE
16:26:31 DEBUG Exception is preset. Setting retry_loop to true
16:26:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:32 WARNING Response is NONE
16:26:32 DEBUG Exception is preset. Setting retry_loop to true
16:26:32 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-22: [same FailException → TypeError: 'LodestarLogger' object is not callable traceback as Thread-10 above]
16:26:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:35 WARNING Response is NONE
16:26:35 WARNING Response is NONE
16:26:35 DEBUG Exception is preset. Setting retry_loop to true
16:26:35 DEBUG Exception is preset. Setting retry_loop to true
16:26:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:35 WARNING Response is NONE
16:26:35 DEBUG Exception is preset. Setting retry_loop to true
16:26:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:40 WARNING Response is NONE
16:26:40 DEBUG Exception is preset. Setting retry_loop to true
16:26:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:41 WARNING Response is NONE
16:26:41 DEBUG Exception is preset. Setting retry_loop to true
16:26:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-15: [same FailException → TypeError: 'LodestarLogger' object is not callable traceback as Thread-10 above]
16:26:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:42 WARNING Response is NONE
16:26:42 WARNING Response is NONE
16:26:42 DEBUG Exception is preset. Setting retry_loop to true
16:26:42 DEBUG Exception is preset. Setting retry_loop to true
16:26:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:46 WARNING Response is NONE
16:26:46 DEBUG Exception is preset. Setting retry_loop to true
16:26:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:48 WARNING Response is NONE
16:26:48 DEBUG Exception is preset. Setting retry_loop to true
16:26:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:48 WARNING Response is NONE
16:26:48 DEBUG Exception is preset. Setting retry_loop to true
16:26:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:51 WARNING Response is NONE
16:26:51 DEBUG Exception is preset. Setting retry_loop to true
16:26:51 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-12: [same FailException → TypeError: 'LodestarLogger' object is not callable traceback as Thread-10 above]
16:26:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:53 WARNING Response is NONE
16:26:53 DEBUG Exception is preset. Setting retry_loop to true
16:26:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:55 WARNING Response is NONE
16:26:55 DEBUG Exception is preset. Setting retry_loop to true
16:26:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:26:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:57 WARNING Response is NONE
16:26:57 DEBUG Exception is preset. Setting retry_loop to true
16:26:57 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-20: [same FailException → TypeError: 'LodestarLogger' object is not callable traceback as Thread-10 above]
16:26:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:26:59 WARNING Response is NONE
16:26:59 DEBUG Exception is preset. Setting retry_loop to true
16:26:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:27:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:27:00 WARNING Response is NONE
16:27:00 DEBUG Exception is preset. Setting retry_loop to true
16:27:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:27:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:27:01 WARNING Response is NONE
16:27:01 DEBUG Exception is preset. Setting retry_loop to true
16:27:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-13: [same FailException → TypeError: 'LodestarLogger' object is not callable traceback as Thread-10 above]
16:27:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:27:04 WARNING Response is NONE
16:27:04 DEBUG Exception is preset. Setting retry_loop to true
16:27:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:27:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:27:06 WARNING Response is NONE
16:27:06 DEBUG Exception is preset. Setting retry_loop to true
16:27:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
16:27:08 INFO 16:27:08 INFO [loop_until]: kubectl --namespace=xlou top pods 16:27:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:27:08 INFO [loop_until]: OK (rc = 0) 16:27:08 DEBUG --- stdout --- 16:27:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 19m 4297Mi am-55f77847b7-ngpns 13m 4325Mi am-55f77847b7-q6zcv 10m 4430Mi ds-cts-0 9m 394Mi ds-cts-1 7m 366Mi ds-cts-2 8m 359Mi ds-idrepo-0 14m 10326Mi ds-idrepo-1 18m 10296Mi ds-idrepo-2 29m 10336Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 1290Mi idm-65858d8c4c-8ff69 8m 3448Mi lodemon-56989b88bb-nm2fw 3m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1m 98Mi 16:27:08 DEBUG --- stderr --- 16:27:08 DEBUG 16:27:08 INFO 16:27:08 INFO [loop_until]: kubectl --namespace=xlou top node 16:27:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:27:08 INFO [loop_until]: OK (rc = 0) 16:27:08 DEBUG --- stdout --- 16:27:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5255Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5374Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4740Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2095Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2268Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1047Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1050Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 10950Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 76m 0% 10968Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 10910Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 64m 0% 1544Mi 2% 16:27:08 DEBUG --- stderr --- 16:27:08 DEBUG 16:27:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with 
url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:27:10 WARNING Response is NONE 16:27:10 DEBUG Exception is preset. Setting retry_loop to true 16:27:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 16:27:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:27:12 WARNING Response is NONE 16:27:12 DEBUG Exception is preset. Setting retry_loop to true 16:27:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 16:27:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 16:27:15 WARNING Response is NONE 16:27:15 DEBUG Exception is preset. Setting retry_loop to true 16:27:15 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:27:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:27:17 WARNING Response is NONE
16:27:17 DEBUG Exception is preset. Setting retry_loop to true
16:27:17 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-14:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:27:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691767388 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
16:27:21 WARNING Response is NONE
16:27:21 DEBUG Exception is preset. Setting retry_loop to true
16:27:21 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-18:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
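The "sleeping for 10 secs before retry" and "Hit retry pattern for a 5 time" messages point to a bounded retry loop around the HTTP call. The real `HttpCmd.request_cmd` implementation is not visible in this log, so the following is only a sketch of that pattern under assumed names (`get_with_retries` and the injected `fetch` callable are my own):

```python
import time

def get_with_retries(fetch, retries=5, delay=10):
    """Call `fetch` (a stand-in for the real HTTP GET), retrying transient
    connection failures; fail with an error after `retries` attempts."""
    last_exc = None
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except ConnectionError as exc:  # [Errno 111] refusals surface as this family
            last_exc = exc
            if attempt < retries:
                time.sleep(delay)
    # Mirrors the FailException message seen in the traceback above
    raise RuntimeError('Failed to obtain response from server...') from last_exc
```

With Prometheus unreachable, every attempt is refused, so the loop exhausts its five tries and raises, which is exactly the `FailException` path the tracebacks show.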
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
16:28:08 INFO
16:28:08 INFO [loop_until]: kubectl --namespace=xlou top pods
16:28:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:28:08 INFO [loop_until]: OK (rc = 0)
16:28:08 DEBUG --- stdout ---
16:28:08 DEBUG NAME                         CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m           4Mi
am-55f77847b7-7qk7g            87m          4378Mi
am-55f77847b7-ngpns            8m           4325Mi
am-55f77847b7-q6zcv            70m          4464Mi
ds-cts-0                       78m          396Mi
ds-cts-1                       69m          367Mi
ds-cts-2                       94m          360Mi
ds-idrepo-0                    13m          10327Mi
ds-idrepo-1                    116m         10302Mi
ds-idrepo-2                    17m          10337Mi
end-user-ui-6845bc78c7-m5k2c   1m           3Mi
idm-65858d8c4c-5kwbg           6m           1290Mi
idm-65858d8c4c-8ff69           8m           3449Mi
lodemon-56989b88bb-nm2fw       2m           66Mi
login-ui-74d6fb46c-2qx2r       1m           3Mi
overseer-0-5fcfb8f45c-v6ck5    1m           98Mi
16:28:08 DEBUG --- stderr ---
16:28:08 DEBUG
16:28:08 INFO
16:28:08 INFO [loop_until]: kubectl --namespace=xlou top node
16:28:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:28:08 INFO [loop_until]: OK (rc = 0)
16:28:08 DEBUG --- stdout ---
16:28:08 DEBUG NAME                                    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   79m          0%     1254Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   158m         0%     5335Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   129m         0%     5534Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   203m         1%     5415Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   72m          0%     4735Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   124m         0%     2095Mi          3%
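The secondary `TypeError: 'LodestarLogger' object is not callable` comes from `monitoring.py` line 315 invoking `self.logger(...)` as if the logger wrapper were a function, which masks the original `FailException`. A minimal reproduction with a hypothetical stand-in class (the real `LodestarLogger` API is not visible in this log; the `error()` method name is an assumption):

```python
class LodestarLogger:
    """Hypothetical stand-in for the real logger wrapper (assumed shape)."""
    def error(self, msg):  # assumed method name, not confirmed by the log
        return f"ERROR {msg}"

logger = LodestarLogger()

# What monitoring.py does: treats the wrapper object itself as a function.
try:
    logger("Query failed")  # raises TypeError: object is not callable
except TypeError as exc:
    print(exc)

# Likely fix: call a logging method on the wrapper instead.
print(logger.error("Query failed"))
```

An alternative fix, if call syntax is intended, would be defining `__call__` on `LodestarLogger`; either way the exception handler at line 315 currently fails before it can report the query error.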
gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 2252Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 142m 0% 1046Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1047Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 135m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 319m 2% 10948Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 77m 0% 10975Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 112m 0% 10946Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 78m 0% 1540Mi 2% 16:28:08 DEBUG --- stderr --- 16:28:08 DEBUG 16:29:08 INFO 16:29:08 INFO [loop_until]: kubectl --namespace=xlou top pods 16:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:29:08 INFO [loop_until]: OK (rc = 0) 16:29:08 DEBUG --- stdout --- 16:29:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 22m 4388Mi am-55f77847b7-ngpns 19m 4366Mi am-55f77847b7-q6zcv 11m 4446Mi ds-cts-0 338m 396Mi ds-cts-1 113m 367Mi ds-cts-2 156m 360Mi ds-idrepo-0 3119m 13338Mi ds-idrepo-1 306m 10303Mi ds-idrepo-2 254m 10344Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 8m 1309Mi idm-65858d8c4c-8ff69 13m 3470Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1110m 359Mi 16:29:08 DEBUG --- stderr --- 16:29:08 DEBUG 16:29:08 INFO 16:29:08 INFO [loop_until]: kubectl --namespace=xlou top node 16:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:29:08 INFO [loop_until]: OK (rc = 0) 16:29:08 DEBUG --- stdout --- 16:29:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 5346Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5519Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4758Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2088Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2272Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 219m 1% 1049Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 211m 1% 1066Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 450m 2% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3213m 20% 13724Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 299m 1% 10977Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 316m 1% 10921Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1208m 7% 1800Mi 3% 16:29:08 DEBUG --- stderr --- 16:29:08 DEBUG 16:30:08 INFO 16:30:08 INFO [loop_until]: kubectl --namespace=xlou top pods 16:30:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:30:08 INFO 16:30:08 INFO [loop_until]: kubectl --namespace=xlou top node 16:30:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:30:08 INFO [loop_until]: OK (rc = 0) 16:30:08 DEBUG --- stdout --- 16:30:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1257Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 76m 0% 5348Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5516Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 4754Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2082Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2272Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1049Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1057Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2816m 17% 13920Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 78m 0% 10977Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 10918Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1214m 7% 1795Mi 3% 16:30:08 DEBUG --- stderr --- 16:30:08 DEBUG 16:30:08 INFO [loop_until]: OK (rc = 0) 16:30:08 DEBUG --- stdout --- 16:30:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 16m 4390Mi am-55f77847b7-ngpns 20m 4367Mi am-55f77847b7-q6zcv 10m 4447Mi ds-cts-0 7m 394Mi ds-cts-1 8m 373Mi ds-cts-2 8m 360Mi ds-idrepo-0 2802m 13316Mi ds-idrepo-1 32m 10301Mi ds-idrepo-2 17m 10344Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 8m 1309Mi idm-65858d8c4c-8ff69 9m 3471Mi 
lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1107m 359Mi 16:30:08 DEBUG --- stderr --- 16:30:08 DEBUG 16:31:08 INFO 16:31:08 INFO 16:31:08 INFO [loop_until]: kubectl --namespace=xlou top node 16:31:08 INFO [loop_until]: kubectl --namespace=xlou top pods 16:31:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:31:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:31:09 INFO [loop_until]: OK (rc = 0) 16:31:09 DEBUG --- stdout --- 16:31:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 16m 4391Mi am-55f77847b7-ngpns 12m 4367Mi am-55f77847b7-q6zcv 9m 4446Mi ds-cts-0 8m 394Mi ds-cts-1 9m 373Mi ds-cts-2 8m 360Mi ds-idrepo-0 2968m 13373Mi ds-idrepo-1 33m 10309Mi ds-idrepo-2 24m 10344Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 8m 1309Mi idm-65858d8c4c-8ff69 9m 3471Mi lodemon-56989b88bb-nm2fw 1m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1256m 359Mi 16:31:09 DEBUG --- stderr --- 16:31:09 DEBUG 16:31:09 INFO [loop_until]: OK (rc = 0) 16:31:09 DEBUG --- stdout --- 16:31:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5351Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5513Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5414Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 4771Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2099Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2271Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1048Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2986m 18% 14039Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 78m 0% 10981Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 10925Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1309m 8% 1793Mi 3% 16:31:09 DEBUG --- stderr --- 16:31:09 DEBUG 16:32:09 INFO 
16:32:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:32:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:32:09 INFO 16:32:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:32:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:32:09 INFO [loop_until]: OK (rc = 0) 16:32:09 DEBUG --- stdout --- 16:32:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1268Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5348Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 5517Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5417Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4761Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2098Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2271Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1049Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1057Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2939m 18% 14033Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 79m 0% 10981Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 80m 0% 10927Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1389m 8% 1797Mi 3% 16:32:09 DEBUG --- stderr --- 16:32:09 DEBUG 16:32:09 INFO [loop_until]: OK (rc = 0) 16:32:09 DEBUG --- stdout --- 16:32:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 16m 4392Mi am-55f77847b7-ngpns 16m 4368Mi am-55f77847b7-q6zcv 13m 4446Mi ds-cts-0 7m 394Mi ds-cts-1 8m 373Mi ds-cts-2 8m 361Mi ds-idrepo-0 2901m 13508Mi ds-idrepo-1 26m 10312Mi ds-idrepo-2 23m 10345Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 7m 1309Mi idm-65858d8c4c-8ff69 9m 3473Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1303m 359Mi 16:32:09 DEBUG --- stderr --- 16:32:09 DEBUG 16:33:09 INFO 16:33:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:33:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:33:09 INFO 16:33:09 INFO 
[loop_until]: kubectl --namespace=xlou top pods 16:33:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:33:09 INFO [loop_until]: OK (rc = 0) 16:33:09 DEBUG --- stdout --- 16:33:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 18m 4391Mi am-55f77847b7-ngpns 15m 4369Mi am-55f77847b7-q6zcv 9m 4448Mi ds-cts-0 9m 394Mi ds-cts-1 11m 373Mi ds-cts-2 9m 361Mi ds-idrepo-0 3272m 13680Mi ds-idrepo-1 21m 10313Mi ds-idrepo-2 20m 10345Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 7m 1309Mi idm-65858d8c4c-8ff69 9m 3473Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1371m 360Mi 16:33:09 DEBUG --- stderr --- 16:33:09 DEBUG 16:33:09 INFO [loop_until]: OK (rc = 0) 16:33:09 DEBUG --- stdout --- 16:33:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1256Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5350Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5515Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5417Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4763Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2101Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2274Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1048Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3281m 20% 14202Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 84m 0% 10982Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 77m 0% 10932Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1459m 9% 1796Mi 3% 16:33:09 DEBUG --- stderr --- 16:33:09 DEBUG 16:34:09 INFO 16:34:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:34:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:34:09 INFO 16:34:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:34:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:34:09 INFO [loop_until]: OK (rc = 0) 16:34:09 
DEBUG --- stdout --- 16:34:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 13m 4392Mi am-55f77847b7-ngpns 28m 4370Mi am-55f77847b7-q6zcv 9m 4447Mi ds-cts-0 8m 394Mi ds-cts-1 8m 373Mi ds-cts-2 8m 361Mi ds-idrepo-0 12m 13680Mi ds-idrepo-1 20m 10314Mi ds-idrepo-2 16m 10348Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 7m 1310Mi idm-65858d8c4c-8ff69 10m 3473Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1m 98Mi 16:34:09 DEBUG --- stderr --- 16:34:09 DEBUG 16:34:09 INFO [loop_until]: OK (rc = 0) 16:34:09 DEBUG --- stdout --- 16:34:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1256Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5349Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5521Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 85m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 80m 0% 4762Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2107Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 2276Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1046Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1053Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14203Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 10985Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 10933Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1539Mi 2% 16:34:09 DEBUG --- stderr --- 16:34:09 DEBUG 16:35:09 INFO 16:35:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:35:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:35:09 INFO 16:35:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:35:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:35:09 INFO [loop_until]: OK (rc = 0) 16:35:09 DEBUG --- stdout --- 16:35:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 12m 4392Mi am-55f77847b7-ngpns 22m 4370Mi am-55f77847b7-q6zcv 
10m 4448Mi ds-cts-0 8m 394Mi ds-cts-1 9m 373Mi ds-cts-2 7m 361Mi ds-idrepo-0 15m 13680Mi ds-idrepo-1 2793m 12594Mi ds-idrepo-2 21m 10350Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 7m 1316Mi idm-65858d8c4c-8ff69 9m 3474Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1054m 376Mi 16:35:09 DEBUG --- stderr --- 16:35:09 DEBUG 16:35:09 INFO [loop_until]: OK (rc = 0) 16:35:09 DEBUG --- stdout --- 16:35:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1254Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5347Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5518Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4759Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2107Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2281Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1049Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1053Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14204Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 69m 0% 10984Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 2888m 18% 13137Mi 22% gke-xlou-cdm-frontend-a8771548-k40m 1145m 7% 1814Mi 3% 16:35:09 DEBUG --- stderr --- 16:35:09 DEBUG 16:36:09 INFO 16:36:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:36:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:36:09 INFO 16:36:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:36:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:36:09 INFO [loop_until]: OK (rc = 0) 16:36:09 DEBUG --- stdout --- 16:36:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1256Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 90m 0% 5356Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5518Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5420Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 
0% 4764Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2107Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2280Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1044Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1052Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14203Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 71m 0% 10987Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 3093m 19% 13913Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1254m 7% 1814Mi 3% 16:36:09 DEBUG --- stderr --- 16:36:09 DEBUG 16:36:09 INFO [loop_until]: OK (rc = 0) 16:36:09 DEBUG --- stdout --- 16:36:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 34m 4398Mi am-55f77847b7-ngpns 16m 4371Mi am-55f77847b7-q6zcv 9m 4449Mi ds-cts-0 12m 394Mi ds-cts-1 8m 373Mi ds-cts-2 8m 361Mi ds-idrepo-0 15m 13680Mi ds-idrepo-1 3012m 13381Mi ds-idrepo-2 21m 10352Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 1316Mi idm-65858d8c4c-8ff69 7m 3474Mi lodemon-56989b88bb-nm2fw 5m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1160m 376Mi 16:36:09 DEBUG --- stderr --- 16:36:09 DEBUG 16:37:09 INFO 16:37:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:37:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:37:09 INFO 16:37:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:37:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:37:09 INFO [loop_until]: OK (rc = 0) 16:37:09 DEBUG --- stdout --- 16:37:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 11m 4399Mi am-55f77847b7-ngpns 10m 4372Mi am-55f77847b7-q6zcv 7m 4449Mi ds-cts-0 14m 395Mi ds-cts-1 9m 374Mi ds-cts-2 8m 361Mi ds-idrepo-0 12m 13680Mi ds-idrepo-1 3018m 13372Mi ds-idrepo-2 18m 10354Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 7m 1316Mi idm-65858d8c4c-8ff69 10m 3474Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1169m 
379Mi 16:37:09 DEBUG --- stderr --- 16:37:09 DEBUG 16:37:09 INFO [loop_until]: OK (rc = 0) 16:37:09 DEBUG --- stdout --- 16:37:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1254Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5354Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5514Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5420Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4765Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2109Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 2282Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1045Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1055Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14204Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 10991Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 3073m 19% 13905Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1283m 8% 1815Mi 3% 16:37:09 DEBUG --- stderr --- 16:37:09 DEBUG 16:38:09 INFO 16:38:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:38:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:38:09 INFO 16:38:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:38:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:38:09 INFO [loop_until]: OK (rc = 0) 16:38:09 DEBUG --- stdout --- 16:38:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 13m 4400Mi am-55f77847b7-ngpns 8m 4372Mi am-55f77847b7-q6zcv 12m 4449Mi ds-cts-0 6m 394Mi ds-cts-1 9m 374Mi ds-cts-2 15m 360Mi ds-idrepo-0 24m 13680Mi ds-idrepo-1 3268m 13382Mi ds-idrepo-2 26m 10354Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 9m 1316Mi idm-65858d8c4c-8ff69 13m 3474Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1267m 380Mi 16:38:09 DEBUG --- stderr --- 16:38:09 DEBUG 16:38:09 INFO [loop_until]: OK (rc = 0) 16:38:09 DEBUG --- stdout --- 16:38:09 DEBUG NAME CPU(cores) CPU% 
MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5357Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5519Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5419Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 4763Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2105Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2282Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1043Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1055Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 14202Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 78m 0% 10990Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 3404m 21% 13920Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1344m 8% 1815Mi 3% 16:38:09 DEBUG --- stderr --- 16:38:09 DEBUG 16:39:09 INFO 16:39:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:39:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:39:09 INFO 16:39:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:39:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:39:09 INFO [loop_until]: OK (rc = 0) 16:39:09 DEBUG --- stdout --- 16:39:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 11m 4401Mi am-55f77847b7-ngpns 15m 4373Mi am-55f77847b7-q6zcv 8m 4450Mi ds-cts-0 7m 395Mi ds-cts-1 8m 375Mi ds-cts-2 8m 360Mi ds-idrepo-0 11m 13680Mi ds-idrepo-1 3421m 13602Mi ds-idrepo-2 15m 10355Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 1317Mi idm-65858d8c4c-8ff69 7m 3474Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1358m 380Mi 16:39:09 DEBUG --- stderr --- 16:39:09 DEBUG 16:39:09 INFO [loop_until]: OK (rc = 0) 16:39:09 DEBUG --- stdout --- 16:39:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1256Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5360Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5518Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5420Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4762Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2107Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2286Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1045Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14203Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 72m 0% 10993Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 3577m 22% 14153Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1445m 9% 1818Mi 3% 16:39:09 DEBUG --- stderr --- 16:39:09 DEBUG 16:40:09 INFO 16:40:09 INFO [loop_until]: kubectl --namespace=xlou top pods 16:40:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:40:09 INFO 16:40:09 INFO [loop_until]: kubectl --namespace=xlou top node 16:40:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:40:10 INFO [loop_until]: OK (rc = 0) 16:40:10 DEBUG --- stdout --- 16:40:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4401Mi am-55f77847b7-ngpns 10m 4373Mi am-55f77847b7-q6zcv 8m 4450Mi ds-cts-0 7m 395Mi ds-cts-1 7m 374Mi ds-cts-2 8m 360Mi ds-idrepo-0 12m 13680Mi ds-idrepo-1 18m 13633Mi ds-idrepo-2 13m 10357Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 7m 1317Mi idm-65858d8c4c-8ff69 6m 3475Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1m 98Mi 16:40:10 DEBUG --- stderr --- 16:40:10 DEBUG 16:40:10 INFO [loop_until]: OK (rc = 0) 16:40:10 DEBUG --- stdout --- 16:40:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 5359Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5521Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5422Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4762Mi 8% 
gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2106Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2284Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1046Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14206Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 10995Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 70m 0% 14156Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1541Mi 2% 16:40:10 DEBUG --- stderr --- 16:40:10 DEBUG 16:41:10 INFO 16:41:10 INFO [loop_until]: kubectl --namespace=xlou top pods 16:41:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:41:10 INFO 16:41:10 INFO [loop_until]: kubectl --namespace=xlou top node 16:41:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:41:10 INFO [loop_until]: OK (rc = 0) 16:41:10 DEBUG --- stdout --- 16:41:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 14m 4401Mi am-55f77847b7-ngpns 9m 4373Mi am-55f77847b7-q6zcv 11m 4450Mi ds-cts-0 7m 395Mi ds-cts-1 6m 374Mi ds-cts-2 8m 360Mi ds-idrepo-0 15m 13680Mi ds-idrepo-1 22m 13633Mi ds-idrepo-2 2287m 11870Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 1317Mi idm-65858d8c4c-8ff69 7m 3475Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1233m 357Mi 16:41:10 DEBUG --- stderr --- 16:41:10 DEBUG 16:41:10 INFO [loop_until]: OK (rc = 0) 16:41:10 DEBUG --- stdout --- 16:41:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1255Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5355Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5521Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5434Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4762Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2108Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2283Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1048Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14206Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2491m 15% 12467Mi 21% gke-xlou-cdm-ds-32e4dcb1-x4wx 78m 0% 14157Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1249m 7% 1795Mi 3% 16:41:10 DEBUG --- stderr --- 16:41:10 DEBUG 16:42:10 INFO 16:42:10 INFO [loop_until]: kubectl --namespace=xlou top pods 16:42:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:42:10 INFO 16:42:10 INFO [loop_until]: kubectl --namespace=xlou top node 16:42:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:42:10 INFO [loop_until]: OK (rc = 0) 16:42:10 DEBUG --- stdout --- 16:42:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 10m 4401Mi am-55f77847b7-ngpns 8m 4373Mi am-55f77847b7-q6zcv 8m 4449Mi ds-cts-0 7m 395Mi ds-cts-1 7m 374Mi ds-cts-2 7m 360Mi ds-idrepo-0 15m 13680Mi ds-idrepo-1 14m 13633Mi ds-idrepo-2 2628m 13365Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 8m 1318Mi idm-65858d8c4c-8ff69 7m 3475Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1197m 356Mi 16:42:10 DEBUG --- stderr --- 16:42:10 DEBUG 16:42:10 INFO [loop_until]: OK (rc = 0) 16:42:10 DEBUG --- stdout --- 16:42:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5359Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5521Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5423Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 68m 0% 4764Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2107Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2282Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1048Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 14209Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920  2697m  16%  13917Mi  23%
gke-xlou-cdm-ds-32e4dcb1-x4wx  68m  0%  14157Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  1268m  7%  1794Mi  3%
16:42:10 DEBUG --- stderr ---
16:42:10 DEBUG
16:43:10 INFO
16:43:10 INFO [loop_until]: kubectl --namespace=xlou top pods
16:43:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:43:10 INFO
16:43:10 INFO [loop_until]: kubectl --namespace=xlou top node
16:43:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:43:10 INFO [loop_until]: OK (rc = 0)
16:43:10 DEBUG --- stdout ---
16:43:10 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  11m  4396Mi
am-55f77847b7-ngpns  10m  4373Mi
am-55f77847b7-q6zcv  8m  4449Mi
ds-cts-0  7m  396Mi
ds-cts-1  8m  375Mi
ds-cts-2  8m  360Mi
ds-idrepo-0  14m  13680Mi
ds-idrepo-1  24m  13634Mi
ds-idrepo-2  2611m  13332Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  6m  1318Mi
idm-65858d8c4c-8ff69  7m  3475Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  1262m  364Mi
16:43:10 DEBUG --- stderr ---
16:43:10 DEBUG
16:43:10 INFO [loop_until]: OK (rc = 0)
16:43:10 DEBUG --- stdout ---
16:43:10 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  78m  0%  1258Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  68m  0%  5354Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  65m  0%  5523Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  63m  0%  5424Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  71m  0%  4766Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  126m  0%  2109Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  69m  0%  2282Mi  3%
gke-xlou-cdm-ds-32e4dcb1-1l6p  65m  0%  1046Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  60m  0%  1058Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1094Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  69m  0%  14209Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  2740m  17%  13896Mi  23%
gke-xlou-cdm-ds-32e4dcb1-x4wx  84m  0%  14157Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  1328m  8%  1803Mi  3%
16:43:10 DEBUG --- stderr ---
16:43:10 DEBUG
16:44:10 INFO
16:44:10 INFO [loop_until]: kubectl --namespace=xlou top pods
16:44:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:44:10 INFO
16:44:10 INFO [loop_until]: kubectl --namespace=xlou top node
16:44:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:44:10 INFO [loop_until]: OK (rc = 0)
16:44:10 DEBUG --- stdout ---
16:44:10 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  10m  4397Mi
am-55f77847b7-ngpns  9m  4376Mi
am-55f77847b7-q6zcv  12m  4455Mi
ds-cts-0  7m  395Mi
ds-cts-1  7m  375Mi
ds-cts-2  8m  362Mi
ds-idrepo-0  14m  13680Mi
ds-idrepo-1  12m  13634Mi
ds-idrepo-2  2813m  13348Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  12m  1318Mi
idm-65858d8c4c-8ff69  11m  3475Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  1318m  364Mi
16:44:10 DEBUG --- stderr ---
16:44:10 DEBUG
16:44:10 INFO [loop_until]: OK (rc = 0)
16:44:10 DEBUG --- stdout ---
16:44:10 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  80m  0%  1256Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  67m  0%  5354Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  71m  0%  5524Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5424Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  78m  0%  4764Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  123m  0%  2125Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  73m  0%  2283Mi  3%
gke-xlou-cdm-ds-32e4dcb1-1l6p  61m  0%  1050Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  55m  0%  1059Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1095Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  70m  0%  14206Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  3098m  19%  14037Mi  23%
gke-xlou-cdm-ds-32e4dcb1-x4wx  58m  0%  14160Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  1372m  8%  1804Mi  3%
16:44:10 DEBUG --- stderr ---
16:44:10 DEBUG
16:45:10 INFO
16:45:10 INFO [loop_until]: kubectl --namespace=xlou top pods
16:45:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:45:10 INFO
16:45:10 INFO [loop_until]: kubectl --namespace=xlou top node
16:45:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:45:10 INFO [loop_until]: OK (rc = 0)
16:45:10 DEBUG --- stdout ---
16:45:10 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4398Mi
am-55f77847b7-ngpns  8m  4377Mi
am-55f77847b7-q6zcv  9m  4456Mi
ds-cts-0  7m  395Mi
ds-cts-1  5m  375Mi
ds-cts-2  7m  362Mi
ds-idrepo-0  14m  13681Mi
ds-idrepo-1  22m  13635Mi
ds-idrepo-2  2929m  13565Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  10m  1318Mi
idm-65858d8c4c-8ff69  11m  3475Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  1414m  364Mi
16:45:10 DEBUG --- stderr ---
16:45:10 DEBUG
16:45:10 INFO [loop_until]: OK (rc = 0)
16:45:10 DEBUG --- stdout ---
16:45:10 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  76m  0%  1259Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5354Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  66m  0%  5525Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  63m  0%  5423Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  75m  0%  4764Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  121m  0%  2115Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  70m  0%  2282Mi  3%
gke-xlou-cdm-ds-32e4dcb1-1l6p  61m  0%  1051Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  55m  0%  1060Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  56m  0%  1094Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  67m  0%  14208Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  3004m  18%  14110Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  75m  0%  14150Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  1478m  9%  1804Mi  3%
16:45:10 DEBUG --- stderr ---
16:45:10 DEBUG
16:46:10 INFO
16:46:10 INFO [loop_until]: kubectl --namespace=xlou top pods
16:46:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:46:10 INFO
16:46:10 INFO [loop_until]: kubectl --namespace=xlou top node
16:46:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:46:10 INFO [loop_until]: OK (rc = 0)
16:46:10 DEBUG --- stdout ---
16:46:10 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  10m  4400Mi
am-55f77847b7-ngpns  7m  4377Mi
am-55f77847b7-q6zcv  8m  4456Mi
ds-cts-0  10m  396Mi
ds-cts-1  9m  375Mi
ds-cts-2  8m  362Mi
ds-idrepo-0  12m  13680Mi
ds-idrepo-1  14m  13626Mi
ds-idrepo-2  456m  13566Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  8m  1319Mi
idm-65858d8c4c-8ff69  11m  3476Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  484m  98Mi
16:46:10 DEBUG --- stderr ---
16:46:10 DEBUG
16:46:10 INFO [loop_until]: OK (rc = 0)
16:46:10 DEBUG --- stdout ---
16:46:10 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1255Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  69m  0%  5368Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  59m  0%  5526Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  61m  0%  5426Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  74m  0%  4763Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  124m  0%  2117Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  70m  0%  2282Mi  3%
gke-xlou-cdm-ds-32e4dcb1-1l6p  64m  0%  1050Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  56m  0%  1057Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  64m  0%  1091Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  64m  0%  14209Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  66m  0%  14109Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  66m  0%  14153Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  231m  1%  1540Mi  2%
16:46:10 DEBUG --- stderr ---
16:46:10 DEBUG
16:47:10 INFO
16:47:10 INFO [loop_until]: kubectl --namespace=xlou top pods
16:47:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:47:10 INFO
16:47:10 INFO [loop_until]: kubectl --namespace=xlou top node
16:47:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:47:10 INFO [loop_until]: OK (rc = 0)
16:47:10 DEBUG --- stdout ---
16:47:10 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4400Mi
am-55f77847b7-ngpns  9m  4383Mi
am-55f77847b7-q6zcv  19m  4456Mi
ds-cts-0  7m  395Mi
ds-cts-1  5m  375Mi
ds-cts-2  8m  362Mi
ds-idrepo-0  12m  13680Mi
ds-idrepo-1  22m  13629Mi
ds-idrepo-2  10m  13566Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  6m  1320Mi
idm-65858d8c4c-8ff69  7m  3476Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  1675m  412Mi
16:47:10 DEBUG --- stderr ---
16:47:10 DEBUG
16:47:10 INFO [loop_until]: OK (rc = 0)
16:47:10 DEBUG --- stdout ---
16:47:10 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  76m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  63m  0%  5360Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  76m  0%  5528Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5431Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  74m  0%  4765Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  127m  0%  2117Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  69m  0%  2285Mi  3%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1045Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  50m  0%  1060Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  57m  0%  1095Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  62m  0%  14210Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  62m  0%  14109Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  69m  0%  14155Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  1917m  12%  1861Mi  3%
16:47:10 DEBUG --- stderr ---
16:47:10 DEBUG
16:48:10 INFO
16:48:10 INFO [loop_until]: kubectl --namespace=xlou top pods
16:48:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:48:10 INFO
16:48:10 INFO [loop_until]: kubectl --namespace=xlou top node
16:48:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:48:10 INFO [loop_until]: OK (rc = 0)
16:48:10 DEBUG --- stdout ---
16:48:10 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  10m  4406Mi
am-55f77847b7-ngpns  10m  4436Mi
am-55f77847b7-q6zcv  8m  4462Mi
ds-cts-0  7m  396Mi
ds-cts-1  6m  376Mi
ds-cts-2  7m  364Mi
ds-idrepo-0  472m  13681Mi
ds-idrepo-1  16m  13630Mi
ds-idrepo-2  21m  13567Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  563m  3350Mi
idm-65858d8c4c-8ff69  484m  3507Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  119m  487Mi
16:48:10 DEBUG --- stderr ---
16:48:10 DEBUG
16:48:10 INFO [loop_until]: OK (rc = 0)
16:48:10 DEBUG --- stdout ---
16:48:10 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  75m  0%  1255Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5366Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  65m  0%  5532Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5486Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  533m  3%  4798Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  161m  1%  2121Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  611m  3%  4312Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1049Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  56m  0%  1059Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  60m  0%  1098Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  511m  3%  14209Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  72m  0%  14109Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  73m  0%  14156Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  186m  1%  1960Mi  3%
16:48:10 DEBUG --- stderr ---
16:48:10 DEBUG
16:49:10 INFO
16:49:10 INFO [loop_until]: kubectl --namespace=xlou top pods
16:49:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:49:11 INFO [loop_until]: OK (rc = 0)
16:49:11 DEBUG --- stdout ---
16:49:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  15m  4418Mi
am-55f77847b7-ngpns  21m  4434Mi
am-55f77847b7-q6zcv  11m  4462Mi
ds-cts-0  7m  397Mi
ds-cts-1  7m  376Mi
ds-cts-2  7m  363Mi
ds-idrepo-0  451m  13694Mi
ds-idrepo-1  15m  13631Mi
ds-idrepo-2  13m  13567Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  480m  3360Mi
idm-65858d8c4c-8ff69  328m  3509Mi
lodemon-56989b88bb-nm2fw  5m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  102m  491Mi
16:49:11 DEBUG --- stderr ---
16:49:11 DEBUG
16:49:11 INFO
16:49:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:49:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:49:11 INFO [loop_until]: OK (rc = 0)
16:49:11 DEBUG --- stdout ---
16:49:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  76m  0%  1255Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  65m  0%  5365Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  64m  0%  5531Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  80m  0%  5483Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  334m  2%  4795Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  156m  0%  2135Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  565m  3%  4325Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  62m  0%  1050Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  53m  0%  1057Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  57m  0%  1094Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  508m  3%  14223Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  62m  0%  14112Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  65m  0%  14159Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  165m  1%  1963Mi  3%
16:49:11 DEBUG --- stderr ---
16:49:11 DEBUG
16:50:11 INFO
16:50:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:50:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:50:11 INFO [loop_until]: OK (rc = 0)
16:50:11 DEBUG --- stdout ---
16:50:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4406Mi
am-55f77847b7-ngpns  10m  4434Mi
am-55f77847b7-q6zcv  8m  4462Mi
ds-cts-0  6m  396Mi
ds-cts-1  5m  377Mi
ds-cts-2  8m  363Mi
ds-idrepo-0  468m  13716Mi
ds-idrepo-1  17m  13631Mi
ds-idrepo-2  11m  13567Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  463m  3356Mi
idm-65858d8c4c-8ff69  402m  3511Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  77m  493Mi
16:50:11 DEBUG --- stderr ---
16:50:11 DEBUG
16:50:11 INFO
16:50:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:50:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:50:11 INFO [loop_until]: OK (rc = 0)
16:50:11 DEBUG --- stdout ---
16:50:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  78m  0%  1262Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  65m  0%  5362Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  63m  0%  5533Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  63m  0%  5481Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  536m  3%  4794Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  159m  1%  2132Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  530m  3%  4322Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  64m  0%  1048Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  51m  0%  1058Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1094Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  565m  3%  14240Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  64m  0%  14110Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  70m  0%  14159Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  147m  0%  1966Mi  3%
16:50:11 DEBUG --- stderr ---
16:50:11 DEBUG
16:51:11 INFO
16:51:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:51:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:51:11 INFO [loop_until]: OK (rc = 0)
16:51:11 DEBUG --- stdout ---
16:51:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4407Mi
am-55f77847b7-ngpns  9m  4434Mi
am-55f77847b7-q6zcv  16m  4462Mi
ds-cts-0  9m  396Mi
ds-cts-1  5m  377Mi
ds-cts-2  8m  363Mi
ds-idrepo-0  385m  13718Mi
ds-idrepo-1  25m  13627Mi
ds-idrepo-2  12m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  510m  3364Mi
idm-65858d8c4c-8ff69  274m  3509Mi
lodemon-56989b88bb-nm2fw  3m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  72m  494Mi
16:51:11 DEBUG --- stderr ---
16:51:11 DEBUG
16:51:11 INFO
16:51:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:51:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:51:11 INFO [loop_until]: OK (rc = 0)
16:51:11 DEBUG --- stdout ---
16:51:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  74m  0%  1257Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  65m  0%  5363Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  66m  0%  5530Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  66m  0%  5482Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  389m  2%  4795Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  159m  1%  2133Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  598m  3%  4327Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  62m  0%  1051Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  49m  0%  1060Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  64m  0%  1105Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  508m  3%  14242Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  63m  0%  14110Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  77m  0%  14154Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  137m  0%  1969Mi  3%
16:51:11 DEBUG --- stderr ---
16:51:11 DEBUG
16:52:11 INFO
16:52:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:52:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:52:11 INFO [loop_until]: OK (rc = 0)
16:52:11 DEBUG --- stdout ---
16:52:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4407Mi
am-55f77847b7-ngpns  12m  4434Mi
am-55f77847b7-q6zcv  21m  4463Mi
ds-cts-0  7m  398Mi
ds-cts-1  5m  376Mi
ds-cts-2  7m  363Mi
ds-idrepo-0  383m  13719Mi
ds-idrepo-1  19m  13629Mi
ds-idrepo-2  12m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  421m  3365Mi
idm-65858d8c4c-8ff69  401m  3510Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  79m  495Mi
16:52:11 DEBUG --- stderr ---
16:52:11 DEBUG
16:52:11 INFO
16:52:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:52:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:52:11 INFO [loop_until]: OK (rc = 0)
16:52:11 DEBUG --- stdout ---
16:52:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1262Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5363Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  76m  0%  5534Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5484Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  486m  3%  4797Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  158m  0%  2139Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  437m  2%  4330Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  61m  0%  1052Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  52m  0%  1057Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1096Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  480m  3%  14249Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  61m  0%  14112Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  74m  0%  14157Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  146m  0%  1967Mi  3%
16:52:11 DEBUG --- stderr ---
16:52:11 DEBUG
16:53:11 INFO
16:53:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:53:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:53:11 INFO [loop_until]: OK (rc = 0)
16:53:11 DEBUG --- stdout ---
16:53:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  10m  4407Mi
am-55f77847b7-ngpns  8m  4435Mi
am-55f77847b7-q6zcv  15m  4463Mi
ds-cts-0  7m  396Mi
ds-cts-1  13m  376Mi
ds-cts-2  8m  364Mi
ds-idrepo-0  436m  13719Mi
ds-idrepo-1  15m  13630Mi
ds-idrepo-2  12m  13567Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  420m  3372Mi
idm-65858d8c4c-8ff69  351m  3510Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  63m  495Mi
16:53:11 DEBUG --- stderr ---
16:53:11 DEBUG
16:53:11 INFO
16:53:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:53:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:53:11 INFO [loop_until]: OK (rc = 0)
16:53:11 DEBUG --- stdout ---
16:53:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  76m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  62m  0%  5363Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  71m  0%  5535Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  62m  0%  5483Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  442m  2%  4796Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  158m  0%  2138Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  475m  2%  4335Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  64m  0%  1050Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  58m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  60m  0%  1094Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  482m  3%  14246Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  65m  0%  14113Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  71m  0%  14157Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  131m  0%  1967Mi  3%
16:53:11 DEBUG --- stderr ---
16:53:11 DEBUG
16:54:11 INFO
16:54:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:54:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:54:11 INFO [loop_until]: OK (rc = 0)
16:54:11 DEBUG --- stdout ---
16:54:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4407Mi
am-55f77847b7-ngpns  8m  4434Mi
am-55f77847b7-q6zcv  7m  4463Mi
ds-cts-0  7m  397Mi
ds-cts-1  6m  376Mi
ds-cts-2  7m  364Mi
ds-idrepo-0  450m  13719Mi
ds-idrepo-1  19m  13631Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  4Mi
idm-65858d8c4c-5kwbg  460m  3373Mi
idm-65858d8c4c-8ff69  369m  3511Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  59m  496Mi
16:54:11 DEBUG --- stderr ---
16:54:11 DEBUG
16:54:11 INFO
16:54:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:54:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:54:11 INFO [loop_until]: OK (rc = 0)
16:54:11 DEBUG --- stdout ---
16:54:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  72m  0%  1258Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5364Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  61m  0%  5535Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5483Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  505m  3%  4797Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  158m  0%  2136Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  528m  3%  4333Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1050Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  54m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  64m  0%  1094Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  519m  3%  14245Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  63m  0%  14115Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  70m  0%  14154Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  121m  0%  1968Mi  3%
16:54:11 DEBUG --- stderr ---
16:54:11 DEBUG
16:55:11 INFO
16:55:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:55:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:55:11 INFO [loop_until]: OK (rc = 0)
16:55:11 DEBUG --- stdout ---
16:55:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4407Mi
am-55f77847b7-ngpns  8m  4434Mi
am-55f77847b7-q6zcv  8m  4463Mi
ds-cts-0  6m  397Mi
ds-cts-1  5m  376Mi
ds-cts-2  6m  363Mi
ds-idrepo-0  541m  13719Mi
ds-idrepo-1  15m  13631Mi
ds-idrepo-2  17m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  4Mi
idm-65858d8c4c-5kwbg  526m  3374Mi
idm-65858d8c4c-8ff69  436m  3513Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  67m  496Mi
16:55:11 DEBUG --- stderr ---
16:55:11 DEBUG
16:55:11 INFO
16:55:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:55:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:55:11 INFO [loop_until]: OK (rc = 0)
16:55:11 DEBUG --- stdout ---
16:55:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  73m  0%  1259Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  63m  0%  5366Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  63m  0%  5533Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  61m  0%  5483Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  496m  3%  4798Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  161m  1%  2137Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  620m  3%  4336Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  57m  0%  1052Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  49m  0%  1061Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  56m  0%  1095Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  571m  3%  14247Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  67m  0%  14115Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  69m  0%  14154Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  134m  0%  1967Mi  3%
16:55:11 DEBUG --- stderr ---
16:55:11 DEBUG
16:56:11 INFO
16:56:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:56:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:56:11 INFO [loop_until]: OK (rc = 0)
16:56:11 DEBUG --- stdout ---
16:56:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4407Mi
am-55f77847b7-ngpns  9m  4434Mi
am-55f77847b7-q6zcv  11m  4464Mi
ds-cts-0  8m  397Mi
ds-cts-1  5m  376Mi
ds-cts-2  9m  364Mi
ds-idrepo-0  554m  13738Mi
ds-idrepo-1  17m  13630Mi
ds-idrepo-2  17m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  484m  3373Mi
idm-65858d8c4c-8ff69  349m  3513Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  67m  496Mi
16:56:11 DEBUG --- stderr ---
16:56:11 DEBUG
16:56:11 INFO
16:56:11 INFO [loop_until]: kubectl --namespace=xlou top node
16:56:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:56:11 INFO [loop_until]: OK (rc = 0)
16:56:11 DEBUG --- stdout ---
16:56:11 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  75m  0%  1261Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5366Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  64m  0%  5536Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5485Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  412m  2%  4798Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  168m  1%  2139Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  553m  3%  4336Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  62m  0%  1046Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  51m  0%  1058Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1095Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  557m  3%  14248Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  64m  0%  14112Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  69m  0%  14159Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  138m  0%  1967Mi  3%
16:56:11 DEBUG --- stderr ---
16:56:11 DEBUG
16:57:11 INFO
16:57:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:57:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:57:11 INFO [loop_until]: OK (rc = 0)
16:57:11 DEBUG --- stdout ---
16:57:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  10m  4407Mi
am-55f77847b7-ngpns  8m  4435Mi
am-55f77847b7-q6zcv  20m  4464Mi
ds-cts-0  7m  397Mi
ds-cts-1  8m  376Mi
ds-cts-2  8m  364Mi
ds-idrepo-0  540m  13786Mi
ds-idrepo-1  14m  13630Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  505m  3375Mi
idm-65858d8c4c-8ff69  394m  3513Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  93m  497Mi
16:57:11 DEBUG --- stderr ---
16:57:11 DEBUG
16:57:12 INFO
16:57:12 INFO [loop_until]: kubectl --namespace=xlou top node
16:57:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:57:12 INFO [loop_until]: OK (rc = 0)
16:57:12 DEBUG --- stdout ---
16:57:12 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  72m  0%  1261Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  63m  0%  5367Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  69m  0%  5534Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  62m  0%  5483Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  455m  2%  4797Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  163m  1%  2136Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  584m  3%  4336Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  62m  0%  1050Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  56m  0%  1056Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  57m  0%  1091Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  618m  3%  14313Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  63m  0%  14114Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  69m  0%  14166Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  143m  0%  1970Mi  3%
16:57:12 DEBUG --- stderr ---
16:57:12 DEBUG
16:58:11 INFO
16:58:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:58:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:58:11 INFO [loop_until]: OK (rc = 0)
16:58:11 DEBUG --- stdout ---
16:58:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  12m  4413Mi
am-55f77847b7-ngpns  9m  4435Mi
am-55f77847b7-q6zcv  8m  4465Mi
ds-cts-0  6m  397Mi
ds-cts-1  8m  377Mi
ds-cts-2  8m  364Mi
ds-idrepo-0  579m  13787Mi
ds-idrepo-1  16m  13630Mi
ds-idrepo-2  18m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  382m  3377Mi
idm-65858d8c4c-8ff69  448m  3515Mi
lodemon-56989b88bb-nm2fw  5m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  64m  497Mi
16:58:11 DEBUG --- stderr ---
16:58:11 DEBUG
16:58:12 INFO
16:58:12 INFO [loop_until]: kubectl --namespace=xlou top node
16:58:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:58:12 INFO [loop_until]: OK (rc = 0)
16:58:12 DEBUG --- stdout ---
16:58:12 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1259Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  66m  0%  5372Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  64m  0%  5536Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  63m  0%  5484Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  511m  3%  4800Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  162m  1%  2135Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  434m  2%  4337Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  71m  0%  1061Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  58m  0%  1058Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1097Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  606m  3%  14312Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  72m  0%  14115Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  70m  0%  14157Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  133m  0%  1967Mi  3%
16:58:12 DEBUG --- stderr ---
16:58:12 DEBUG
16:59:11 INFO
16:59:11 INFO [loop_until]: kubectl --namespace=xlou top pods
16:59:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:59:11 INFO [loop_until]: OK (rc = 0)
16:59:11 DEBUG --- stdout ---
16:59:11 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  19m  4413Mi
am-55f77847b7-ngpns  8m  4435Mi
am-55f77847b7-q6zcv  11m  4465Mi
ds-cts-0  7m  397Mi
ds-cts-1  11m  377Mi
ds-cts-2  6m  364Mi
ds-idrepo-0  538m  13789Mi
ds-idrepo-1  14m  13630Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  406m  3377Mi
idm-65858d8c4c-8ff69  399m  3515Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  54m  498Mi
16:59:11 DEBUG --- stderr ---
16:59:11 DEBUG
16:59:12 INFO
16:59:12 INFO [loop_until]: kubectl --namespace=xlou top node
16:59:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
16:59:12 INFO [loop_until]: OK (rc = 0)
16:59:12 DEBUG --- stdout ---
16:59:12 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  73m  0%  5372Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  69m  0%  5537Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  63m  0%  5485Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  456m  2%  4802Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  153m  0%  2140Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  502m  3%  4337Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  59m  0%  1051Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  58m  0%  1059Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  55m  0%  1098Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  611m  3%  14315Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  66m  0%  14116Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  66m  0%  14159Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  125m  0%  1969Mi  3%
16:59:12 DEBUG --- stderr ---
16:59:12 DEBUG
17:00:12 INFO
17:00:12 INFO [loop_until]: kubectl --namespace=xlou top pods
17:00:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:00:12 INFO [loop_until]: OK (rc = 0)
17:00:12 DEBUG --- stdout ---
17:00:12 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  25m  4413Mi
am-55f77847b7-ngpns  9m  4436Mi
am-55f77847b7-q6zcv  8m  4465Mi
ds-cts-0  6m  397Mi
ds-cts-1  7m  377Mi
ds-cts-2  8m  364Mi
ds-idrepo-0  569m  13790Mi
ds-idrepo-1  14m  13630Mi
ds-idrepo-2  12m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  533m  3379Mi
idm-65858d8c4c-8ff69  296m  3515Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  54m  498Mi
17:00:12 DEBUG --- stderr ---
17:00:12 DEBUG
17:00:12 INFO
17:00:12 INFO [loop_until]: kubectl --namespace=xlou top node
17:00:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:00:12 INFO [loop_until]: OK (rc = 0)
17:00:12 DEBUG --- stdout ---
17:00:12 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  77m  0%  5371Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  62m  0%  5535Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5481Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  335m  2%  4803Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  158m  0%  2142Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  710m  4%  4341Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  65m  0%  1053Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  58m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1095Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  547m  3%  14310Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  66m  0%  14115Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  73m  0%  14162Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  134m  0%  1981Mi  3%
17:00:12 DEBUG --- stderr ---
17:00:12 DEBUG
17:01:12 INFO
17:01:12 INFO [loop_until]: kubectl --namespace=xlou top pods
17:01:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:01:12 INFO [loop_until]: OK (rc = 0)
17:01:12 DEBUG --- stdout ---
17:01:12 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  16m  4418Mi
am-55f77847b7-ngpns  9m  4437Mi
am-55f77847b7-q6zcv  10m  4465Mi
ds-cts-0  7m  397Mi
ds-cts-1  6m  377Mi
ds-cts-2  7m  364Mi
ds-idrepo-0  590m  13790Mi
ds-idrepo-1  21m  13634Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  531m  3379Mi
idm-65858d8c4c-8ff69  367m  3515Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  73m  499Mi
17:01:12 DEBUG --- stderr ---
17:01:12 DEBUG
17:01:12 INFO
17:01:12 INFO [loop_until]: kubectl --namespace=xlou top node
17:01:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:01:12 INFO [loop_until]: OK (rc = 0)
17:01:12 DEBUG --- stdout ---
17:01:12 DEBUG
NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  79m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  72m  0%  5376Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  67m  0%  5537Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5484Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  418m  2%  4801Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  157m  0%  2141Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  566m  3%  4339Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  64m  0%  1053Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  56m  0%  1063Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1097Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  631m  3%  14316Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  61m  0%  14119Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  68m  0%  14161Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  136m  0%  1971Mi  3%
17:01:12 DEBUG --- stderr ---
17:01:12 DEBUG
17:02:12 INFO
17:02:12 INFO [loop_until]: kubectl --namespace=xlou top pods
17:02:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:02:12 INFO [loop_until]: OK (rc = 0)
17:02:12 DEBUG --- stdout ---
17:02:12 DEBUG
NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4420Mi
am-55f77847b7-ngpns  17m  4437Mi
am-55f77847b7-q6zcv  9m  4465Mi
ds-cts-0  8m  397Mi
ds-cts-1  9m  377Mi
ds-cts-2  7m  364Mi
ds-idrepo-0  450m  13790Mi
ds-idrepo-1  15m  13632Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  403m  3380Mi
idm-65858d8c4c-8ff69  353m  3511Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  66m  503Mi
17:02:12 DEBUG --- stderr ---
17:02:12 DEBUG
17:02:12 INFO
17:02:12 INFO [loop_until]: kubectl --namespace=xlou top node
17:02:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:02:12 INFO [loop_until]: OK (rc = 0) 17:02:12 DEBUG --- stdout --- 17:02:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5378Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5539Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5486Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 415m 2% 4798Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 156m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 516m 3% 4350Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1050Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 575m 3% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14117Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14158Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 133m 0% 1973Mi 3% 17:02:12 DEBUG --- stderr --- 17:02:12 DEBUG 17:03:12 INFO 17:03:12 INFO [loop_until]: kubectl --namespace=xlou top pods 17:03:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:03:12 INFO [loop_until]: OK (rc = 0) 17:03:12 DEBUG --- stdout --- 17:03:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4421Mi am-55f77847b7-ngpns 8m 4438Mi am-55f77847b7-q6zcv 7m 4465Mi ds-cts-0 8m 398Mi ds-cts-1 9m 377Mi ds-cts-2 7m 364Mi ds-idrepo-0 501m 13810Mi ds-idrepo-1 14m 13631Mi ds-idrepo-2 10m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 461m 3382Mi idm-65858d8c4c-8ff69 439m 3515Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 70m 504Mi 17:03:12 DEBUG --- stderr --- 17:03:12 DEBUG 17:03:12 INFO 17:03:12 INFO [loop_until]: kubectl --namespace=xlou top node 17:03:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:03:12 INFO [loop_until]: OK (rc = 0) 17:03:12 DEBUG --- stdout --- 17:03:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 
0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5380Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5537Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5487Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 445m 2% 4804Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 156m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 507m 3% 4345Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1053Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 589m 3% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14116Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14156Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 139m 0% 1973Mi 3% 17:03:12 DEBUG --- stderr --- 17:03:12 DEBUG 17:04:12 INFO 17:04:12 INFO [loop_until]: kubectl --namespace=xlou top pods 17:04:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:04:12 INFO [loop_until]: OK (rc = 0) 17:04:12 DEBUG --- stdout --- 17:04:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4424Mi am-55f77847b7-ngpns 8m 4438Mi am-55f77847b7-q6zcv 7m 4465Mi ds-cts-0 15m 397Mi ds-cts-1 5m 377Mi ds-cts-2 8m 365Mi ds-idrepo-0 515m 13810Mi ds-idrepo-1 16m 13631Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 527m 3383Mi idm-65858d8c4c-8ff69 354m 3516Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 72m 503Mi 17:04:12 DEBUG --- stderr --- 17:04:12 DEBUG 17:04:12 INFO 17:04:12 INFO [loop_until]: kubectl --namespace=xlou top node 17:04:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:04:12 INFO [loop_until]: OK (rc = 0) 17:04:12 DEBUG --- stdout --- 17:04:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 
0% 5484Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 449m 2% 4805Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 159m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 542m 3% 4347Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1053Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 625m 3% 14333Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14117Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14158Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 133m 0% 1977Mi 3% 17:04:12 DEBUG --- stderr --- 17:04:12 DEBUG 17:05:12 INFO 17:05:12 INFO [loop_until]: kubectl --namespace=xlou top pods 17:05:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:05:12 INFO [loop_until]: OK (rc = 0) 17:05:12 DEBUG --- stdout --- 17:05:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4424Mi am-55f77847b7-ngpns 10m 4438Mi am-55f77847b7-q6zcv 13m 4465Mi ds-cts-0 7m 397Mi ds-cts-1 9m 377Mi ds-cts-2 8m 365Mi ds-idrepo-0 609m 13821Mi ds-idrepo-1 14m 13632Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 457m 3386Mi idm-65858d8c4c-8ff69 419m 3517Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 54m 503Mi 17:05:12 DEBUG --- stderr --- 17:05:12 DEBUG 17:05:12 INFO 17:05:12 INFO [loop_until]: kubectl --namespace=xlou top node 17:05:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:05:12 INFO [loop_until]: OK (rc = 0) 17:05:12 DEBUG --- stdout --- 17:05:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1256Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5537Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5485Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 503m 3% 4806Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 165m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 
534m 3% 4347Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1052Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 655m 4% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14127Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14162Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 117m 0% 1974Mi 3% 17:05:12 DEBUG --- stderr --- 17:05:12 DEBUG 17:06:12 INFO 17:06:12 INFO [loop_until]: kubectl --namespace=xlou top pods 17:06:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:06:12 INFO [loop_until]: OK (rc = 0) 17:06:12 DEBUG --- stdout --- 17:06:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4424Mi am-55f77847b7-ngpns 8m 4438Mi am-55f77847b7-q6zcv 7m 4465Mi ds-cts-0 6m 397Mi ds-cts-1 5m 377Mi ds-cts-2 5m 365Mi ds-idrepo-0 564m 13818Mi ds-idrepo-1 14m 13633Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 445m 3386Mi idm-65858d8c4c-8ff69 426m 3518Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 54m 503Mi 17:06:12 DEBUG --- stderr --- 17:06:12 DEBUG 17:06:12 INFO 17:06:12 INFO [loop_until]: kubectl --namespace=xlou top node 17:06:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:06:13 INFO [loop_until]: OK (rc = 0) 17:06:13 DEBUG --- stdout --- 17:06:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1256Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5485Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 481m 3% 4804Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 159m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 542m 3% 4347Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1053Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1100Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 618m 3% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14114Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14164Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 122m 0% 1972Mi 3% 17:06:13 DEBUG --- stderr --- 17:06:13 DEBUG 17:07:12 INFO 17:07:12 INFO [loop_until]: kubectl --namespace=xlou top pods 17:07:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:07:12 INFO [loop_until]: OK (rc = 0) 17:07:12 DEBUG --- stdout --- 17:07:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 22m 4421Mi am-55f77847b7-ngpns 11m 4438Mi am-55f77847b7-q6zcv 9m 4466Mi ds-cts-0 6m 398Mi ds-cts-1 5m 378Mi ds-cts-2 6m 365Mi ds-idrepo-0 564m 13810Mi ds-idrepo-1 14m 13634Mi ds-idrepo-2 14m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 483m 3386Mi idm-65858d8c4c-8ff69 468m 3519Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 54m 504Mi 17:07:12 DEBUG --- stderr --- 17:07:12 DEBUG 17:07:13 INFO 17:07:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:07:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:07:13 INFO [loop_until]: OK (rc = 0) 17:07:13 DEBUG --- stdout --- 17:07:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1257Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5379Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5484Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 531m 3% 4805Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 164m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 544m 3% 4350Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1055Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 638m 4% 14337Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14114Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14164Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 120m 0% 1977Mi 3% 17:07:13 DEBUG --- stderr --- 17:07:13 DEBUG 17:08:12 INFO 17:08:12 INFO [loop_until]: kubectl --namespace=xlou top pods 17:08:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:08:13 INFO [loop_until]: OK (rc = 0) 17:08:13 DEBUG --- stdout --- 17:08:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 11m 4421Mi am-55f77847b7-ngpns 11m 4441Mi am-55f77847b7-q6zcv 7m 4465Mi ds-cts-0 7m 397Mi ds-cts-1 4m 377Mi ds-cts-2 7m 365Mi ds-idrepo-0 527m 13823Mi ds-idrepo-1 15m 13634Mi ds-idrepo-2 11m 13567Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 523m 3381Mi idm-65858d8c4c-8ff69 302m 3521Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 49m 504Mi 17:08:13 DEBUG --- stderr --- 17:08:13 DEBUG 17:08:13 INFO 17:08:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:08:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:08:13 INFO [loop_until]: OK (rc = 0) 17:08:13 DEBUG --- stdout --- 17:08:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5380Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5535Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5490Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 382m 2% 4806Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 165m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 528m 3% 4342Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1050Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 607m 3% 14344Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14114Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14163Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 119m 0% 1975Mi 3% 17:08:13 DEBUG --- stderr --- 17:08:13 DEBUG 17:09:13 INFO 17:09:13 INFO [loop_until]: kubectl --namespace=xlou top 
pods 17:09:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:09:13 INFO [loop_until]: OK (rc = 0) 17:09:13 DEBUG --- stdout --- 17:09:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4421Mi am-55f77847b7-ngpns 10m 4441Mi am-55f77847b7-q6zcv 7m 4466Mi ds-cts-0 7m 397Mi ds-cts-1 9m 377Mi ds-cts-2 10m 365Mi ds-idrepo-0 501m 13800Mi ds-idrepo-1 15m 13633Mi ds-idrepo-2 10m 13567Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 358m 3382Mi idm-65858d8c4c-8ff69 340m 3521Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 48m 504Mi 17:09:13 DEBUG --- stderr --- 17:09:13 DEBUG 17:09:13 INFO 17:09:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:09:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:09:13 INFO [loop_until]: OK (rc = 0) 17:09:13 DEBUG --- stdout --- 17:09:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1254Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5381Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5492Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 445m 2% 4808Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 161m 1% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 410m 2% 4340Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1054Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 547m 3% 14325Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14115Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14167Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 111m 0% 1975Mi 3% 17:09:13 DEBUG --- stderr --- 17:09:13 DEBUG 17:10:13 INFO 17:10:13 INFO [loop_until]: kubectl --namespace=xlou top pods 17:10:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:10:13 INFO [loop_until]: OK (rc = 0) 17:10:13 DEBUG --- stdout --- 17:10:13 DEBUG NAME 
CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4421Mi am-55f77847b7-ngpns 8m 4441Mi am-55f77847b7-q6zcv 7m 4465Mi ds-cts-0 6m 398Mi ds-cts-1 8m 378Mi ds-cts-2 7m 365Mi ds-idrepo-0 538m 13804Mi ds-idrepo-1 15m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 391m 3383Mi idm-65858d8c4c-8ff69 384m 3530Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 51m 505Mi 17:10:13 DEBUG --- stderr --- 17:10:13 DEBUG 17:10:13 INFO 17:10:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:10:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:10:13 INFO [loop_until]: OK (rc = 0) 17:10:13 DEBUG --- stdout --- 17:10:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1253Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5379Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5490Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 471m 2% 4817Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 160m 1% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 471m 2% 4344Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1054Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 615m 3% 14328Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14117Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14167Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 114m 0% 1974Mi 3% 17:10:13 DEBUG --- stderr --- 17:10:13 DEBUG 17:11:13 INFO 17:11:13 INFO [loop_until]: kubectl --namespace=xlou top pods 17:11:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:11:13 INFO [loop_until]: OK (rc = 0) 17:11:13 DEBUG --- stdout --- 17:11:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4420Mi am-55f77847b7-ngpns 7m 4439Mi am-55f77847b7-q6zcv 10m 4466Mi ds-cts-0 12m 398Mi 
ds-cts-1 7m 378Mi ds-cts-2 12m 365Mi ds-idrepo-0 646m 13802Mi ds-idrepo-1 13m 13634Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 597m 3385Mi idm-65858d8c4c-8ff69 397m 3533Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 56m 504Mi 17:11:13 DEBUG --- stderr --- 17:11:13 DEBUG 17:11:13 INFO 17:11:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:11:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:11:13 INFO [loop_until]: OK (rc = 0) 17:11:13 DEBUG --- stdout --- 17:11:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1255Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5379Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5539Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5485Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 445m 2% 4821Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 165m 1% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 720m 4% 4345Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 725m 4% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14116Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14165Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 124m 0% 1974Mi 3% 17:11:13 DEBUG --- stderr --- 17:11:13 DEBUG 17:12:13 INFO 17:12:13 INFO [loop_until]: kubectl --namespace=xlou top pods 17:12:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:12:13 INFO [loop_until]: OK (rc = 0) 17:12:13 DEBUG --- stdout --- 17:12:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4419Mi am-55f77847b7-ngpns 7m 4439Mi am-55f77847b7-q6zcv 8m 4466Mi ds-cts-0 8m 398Mi ds-cts-1 11m 374Mi ds-cts-2 9m 365Mi ds-idrepo-0 506m 13823Mi ds-idrepo-1 10m 13633Mi ds-idrepo-2 11m 13567Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 423m 
3385Mi idm-65858d8c4c-8ff69 421m 3532Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 46m 504Mi 17:12:13 DEBUG --- stderr --- 17:12:13 DEBUG 17:12:13 INFO 17:12:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:12:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:12:13 INFO [loop_until]: OK (rc = 0) 17:12:13 DEBUG --- stdout --- 17:12:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1255Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 5379Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5488Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 462m 2% 4822Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 167m 1% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 512m 3% 4345Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1057Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 613m 3% 14338Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14117Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14169Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 118m 0% 1973Mi 3% 17:12:13 DEBUG --- stderr --- 17:12:13 DEBUG 17:13:13 INFO 17:13:13 INFO [loop_until]: kubectl --namespace=xlou top pods 17:13:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:13:13 INFO [loop_until]: OK (rc = 0) 17:13:13 DEBUG --- stdout --- 17:13:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4420Mi am-55f77847b7-ngpns 8m 4439Mi am-55f77847b7-q6zcv 8m 4466Mi ds-cts-0 8m 398Mi ds-cts-1 4m 375Mi ds-cts-2 8m 365Mi ds-idrepo-0 508m 13823Mi ds-idrepo-1 10m 13634Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 495m 3387Mi idm-65858d8c4c-8ff69 283m 3533Mi lodemon-56989b88bb-nm2fw 4m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 48m 504Mi 17:13:13 DEBUG --- stderr --- 
17:13:13 DEBUG 17:13:13 INFO 17:13:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:13:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:13:13 INFO [loop_until]: OK (rc = 0) 17:13:13 DEBUG --- stdout --- 17:13:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1257Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5377Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5537Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5488Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 365m 2% 4824Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 159m 1% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 531m 3% 4348Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1054Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 556m 3% 14348Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14115Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14167Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 115m 0% 1974Mi 3% 17:13:13 DEBUG --- stderr --- 17:13:13 DEBUG 17:14:13 INFO 17:14:13 INFO [loop_until]: kubectl --namespace=xlou top pods 17:14:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:14:13 INFO [loop_until]: OK (rc = 0) 17:14:13 DEBUG --- stdout --- 17:14:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 11m 4421Mi am-55f77847b7-ngpns 15m 4440Mi am-55f77847b7-q6zcv 8m 4466Mi ds-cts-0 7m 398Mi ds-cts-1 5m 375Mi ds-cts-2 9m 367Mi ds-idrepo-0 535m 13790Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 13m 13567Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 447m 3388Mi idm-65858d8c4c-8ff69 436m 3534Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 52m 505Mi 17:14:13 DEBUG --- stderr --- 17:14:13 DEBUG 17:14:13 INFO 17:14:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:14:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:14:13 
INFO [loop_until]: OK (rc = 0) 17:14:13 DEBUG --- stdout --- 17:14:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5536Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 5490Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 444m 2% 4825Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 159m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 548m 3% 4354Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1055Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 520m 3% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14115Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14165Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 121m 0% 1979Mi 3% 17:14:13 DEBUG --- stderr --- 17:14:13 DEBUG 17:15:13 INFO 17:15:13 INFO [loop_until]: kubectl --namespace=xlou top pods 17:15:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:15:13 INFO [loop_until]: OK (rc = 0) 17:15:13 DEBUG --- stdout --- 17:15:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4421Mi am-55f77847b7-ngpns 8m 4439Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 12m 399Mi ds-cts-1 5m 375Mi ds-cts-2 8m 366Mi ds-idrepo-0 528m 13811Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 525m 3389Mi idm-65858d8c4c-8ff69 283m 3536Mi lodemon-56989b88bb-nm2fw 7m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 50m 505Mi 17:15:13 DEBUG --- stderr --- 17:15:13 DEBUG 17:15:13 INFO 17:15:13 INFO [loop_until]: kubectl --namespace=xlou top node 17:15:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:15:13 INFO [loop_until]: OK (rc = 0) 17:15:13 DEBUG --- stdout --- 17:15:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 
1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5539Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5488Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 391m 2% 4821Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 161m 1% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 598m 3% 4352Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1055Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 605m 3% 14337Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14117Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14168Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 117m 0% 1978Mi 3% 17:15:13 DEBUG --- stderr --- 17:15:13 DEBUG 17:16:13 INFO 17:16:13 INFO [loop_until]: kubectl --namespace=xlou top pods 17:16:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:16:13 INFO [loop_until]: OK (rc = 0) 17:16:13 DEBUG --- stdout --- 17:16:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4421Mi am-55f77847b7-ngpns 12m 4440Mi am-55f77847b7-q6zcv 7m 4466Mi ds-cts-0 6m 399Mi ds-cts-1 5m 375Mi ds-cts-2 8m 366Mi ds-idrepo-0 547m 13823Mi ds-idrepo-1 12m 13634Mi ds-idrepo-2 10m 13567Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 503m 3391Mi idm-65858d8c4c-8ff69 338m 3536Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 51m 505Mi 17:16:13 DEBUG --- stderr --- 17:16:13 DEBUG 17:16:14 INFO 17:16:14 INFO [loop_until]: kubectl --namespace=xlou top node 17:16:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:16:14 INFO [loop_until]: OK (rc = 0) 17:16:14 DEBUG --- stdout --- 17:16:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5380Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5539Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 
5487Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 426m 2% 4818Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 161m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 560m 3% 4353Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1055Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 603m 3% 14328Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14118Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 116m 0% 1979Mi 3% 17:16:14 DEBUG --- stderr --- 17:16:14 DEBUG 17:17:14 INFO 17:17:14 INFO [loop_until]: kubectl --namespace=xlou top pods 17:17:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:17:14 INFO [loop_until]: OK (rc = 0) 17:17:14 DEBUG --- stdout --- 17:17:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 10m 4422Mi am-55f77847b7-ngpns 9m 4440Mi am-55f77847b7-q6zcv 8m 4467Mi ds-cts-0 6m 399Mi ds-cts-1 4m 375Mi ds-cts-2 7m 366Mi ds-idrepo-0 484m 13827Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 502m 3390Mi idm-65858d8c4c-8ff69 275m 3537Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 43m 504Mi 17:17:14 DEBUG --- stderr --- 17:17:14 DEBUG 17:17:14 INFO 17:17:14 INFO [loop_until]: kubectl --namespace=xlou top node 17:17:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:17:14 INFO [loop_until]: OK (rc = 0) 17:17:14 DEBUG --- stdout --- 17:17:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5489Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 299m 1% 4819Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 439m 
2% 4351Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 471m 2% 14353Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14116Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 115m 0% 1977Mi 3% 17:17:14 DEBUG --- stderr --- 17:17:14 DEBUG 17:18:14 INFO 17:18:14 INFO [loop_until]: kubectl --namespace=xlou top pods 17:18:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:18:14 INFO [loop_until]: OK (rc = 0) 17:18:14 DEBUG --- stdout --- 17:18:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4423Mi am-55f77847b7-ngpns 9m 4447Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 6m 399Mi ds-cts-1 5m 375Mi ds-cts-2 13m 366Mi ds-idrepo-0 11m 13826Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 10m 13567Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 8m 3390Mi idm-65858d8c4c-8ff69 6m 3537Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 14m 101Mi 17:18:14 DEBUG --- stderr --- 17:18:14 DEBUG 17:18:14 INFO 17:18:14 INFO [loop_until]: kubectl --namespace=xlou top node 17:18:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:18:14 INFO [loop_until]: OK (rc = 0) 17:18:14 DEBUG --- stdout --- 17:18:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5508Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4822Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 4352Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1052Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1102Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374             62m          0%     14352Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             64m          0%     14117Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             61m          0%     14170Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       73m          0%     1579Mi          2%
17:18:14 DEBUG --- stderr ---
17:18:14 DEBUG 127.0.0.1 - - [11/Aug/2023 17:18:57] "GET /monitoring/average?start_time=23-08-11_15:48:26&stop_time=23-08-11_16:16:57 HTTP/1.1" 200 -
17:19:14 INFO
17:19:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:19:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:19:14 INFO [loop_until]: OK (rc = 0)
17:19:14 DEBUG --- stdout ---
17:19:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4423Mi
am-55f77847b7-ngpns           8m           4447Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      7m           399Mi
ds-cts-1                      4m           376Mi
ds-cts-2                      8m           366Mi
ds-idrepo-0                   11m          13826Mi
ds-idrepo-1                   11m          13634Mi
ds-idrepo-2                   10m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          7m           3390Mi
idm-65858d8c4c-8ff69          6m           3537Mi
lodemon-56989b88bb-nm2fw      6m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   1m           101Mi
17:19:14 DEBUG --- stderr ---
17:19:14 DEBUG
17:19:14 INFO
17:19:14 INFO [loop_until]: kubectl --namespace=xlou top node
17:19:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:19:14 INFO [loop_until]: OK (rc = 0)
17:19:14 DEBUG --- stdout ---
17:19:14 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m          0%     1258Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m          0%     5384Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m          0%     5538Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m          0%     5497Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   74m          0%     4820Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   125m         0%     2132Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   72m          0%     4352Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             65m          0%     1056Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m          0%     1058Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             58m          0%     1099Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             61m          0%     14352Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             62m          0%     14117Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m          0%     14173Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       68m          0%     1582Mi          2%
17:19:14 DEBUG --- stderr ---
17:19:14 DEBUG
17:20:14 INFO
17:20:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:20:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:20:14 INFO [loop_until]: OK (rc = 0)
17:20:14 DEBUG --- stdout ---
17:20:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           54m          4457Mi
am-55f77847b7-ngpns           8m           4447Mi
am-55f77847b7-q6zcv           17m          4467Mi
ds-cts-0                      9m           400Mi
ds-cts-1                      6m           375Mi
ds-cts-2                      9m           366Mi
ds-idrepo-0                   542m         13820Mi
ds-idrepo-1                   11m          13634Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          438m         3398Mi
idm-65858d8c4c-8ff69          389m         3537Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   348m         482Mi
17:20:14 DEBUG --- stderr ---
17:20:14 DEBUG
17:20:14 INFO
17:20:14 INFO [loop_until]: kubectl --namespace=xlou top node
17:20:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:20:14 INFO [loop_until]: OK (rc = 0)
17:20:14 DEBUG --- stdout ---
17:20:14 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1260Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   106m         0%     5406Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   75m          0%     5541Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m          0%     5493Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   660m         4%     4827Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   175m         1%     2132Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   634m         3%     4359Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             67m          0%     1055Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             56m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             63m          0%     1102Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             892m         5%     14351Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             65m          0%     14117Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             61m          0%     14173Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       364m         2%     2002Mi          3%
17:20:14 DEBUG --- stderr ---
17:20:14 DEBUG
17:21:14 INFO
17:21:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:21:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:21:14 INFO [loop_until]: OK (rc = 0)
17:21:14 DEBUG --- stdout ---
17:21:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           7m           4449Mi
am-55f77847b7-ngpns           8m           4447Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      7m           401Mi
ds-cts-1                      5m           375Mi
ds-cts-2                      7m           366Mi
ds-idrepo-0                   1029m        13822Mi
ds-idrepo-1                   12m          13635Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          757m         3401Mi
idm-65858d8c4c-8ff69          703m         3535Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   138m         497Mi
17:21:14 DEBUG --- stderr ---
17:21:14 DEBUG
17:21:14 INFO
17:21:14 INFO [loop_until]: kubectl --namespace=xlou top node
17:21:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:21:14 INFO [loop_until]: OK (rc = 0)
17:21:14 DEBUG --- stdout ---
17:21:14 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   78m          0%     1258Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   63m          0%     5407Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   64m          0%     5540Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m          0%     5498Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   739m         4%     4823Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   191m         1%     2134Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   823m         5%     4361Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             65m          0%     1057Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             61m          0%     1101Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1020m        6%     14335Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14114Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m          0%     14176Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       218m         1%     2014Mi          3%
17:21:14 DEBUG --- stderr ---
17:21:14 DEBUG
17:22:14 INFO
17:22:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:22:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:22:14 INFO [loop_until]: OK (rc = 0)
17:22:14 DEBUG --- stdout ---
17:22:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           11m          4449Mi
am-55f77847b7-ngpns           11m          4447Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      7m           400Mi
ds-cts-1                      7m           375Mi
ds-cts-2                      7m           367Mi
ds-idrepo-0                   1093m        13807Mi
ds-idrepo-1                   14m          13635Mi
ds-idrepo-2                   11m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           4Mi
idm-65858d8c4c-5kwbg          938m         3402Mi
idm-65858d8c4c-8ff69          744m         3541Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   132m         496Mi
17:22:14 DEBUG --- stderr ---
17:22:14 DEBUG
17:22:14 INFO
17:22:14 INFO [loop_until]: kubectl --namespace=xlou top node
17:22:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:22:14 INFO [loop_until]: OK (rc = 0)
17:22:14 DEBUG --- stdout ---
17:22:14 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   78m          0%     1256Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m          0%     5405Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   62m          0%     5539Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m          0%     5495Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   816m         5%     4826Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   203m         1%     2135Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1040m        6%     4366Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             64m          0%     1055Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             56m          0%     1062Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             61m          0%     1098Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1223m        7%     14340Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             65m          0%     14120Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             63m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       203m         1%     2011Mi          3%
17:22:14 DEBUG --- stderr ---
17:22:14 DEBUG
17:23:14 INFO
17:23:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:23:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:23:14 INFO [loop_until]: OK (rc = 0)
17:23:14 DEBUG --- stdout ---
17:23:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4449Mi
am-55f77847b7-ngpns           8m           4447Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      7m           400Mi
ds-cts-1                      6m           375Mi
ds-cts-2                      15m          366Mi
ds-idrepo-0                   1052m        13822Mi
ds-idrepo-1                   10m          13635Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          964m         3400Mi
idm-65858d8c4c-8ff69          717m         3542Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   129m         499Mi
17:23:14 DEBUG --- stderr ---
17:23:14 DEBUG
17:23:14 INFO
17:23:14 INFO [loop_until]: kubectl --namespace=xlou top node
17:23:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:23:14 INFO [loop_until]: OK (rc = 0)
17:23:14 DEBUG --- stdout ---
17:23:14 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1260Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   68m          0%     5419Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   63m          0%     5539Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m          0%     5496Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   908m         5%     4829Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   196m         1%     2135Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1010m        6%     4361Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             69m          0%     1052Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             55m          0%     1062Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1098Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1191m        7%     14335Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14120Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             61m          0%     14173Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       194m         1%     2015Mi          3%
17:23:14 DEBUG --- stderr ---
17:23:14 DEBUG
17:24:14 INFO
17:24:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:24:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:24:14 INFO [loop_until]: OK (rc = 0)
17:24:14 DEBUG --- stdout ---
17:24:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4449Mi
am-55f77847b7-ngpns           9m           4447Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      7m           400Mi
ds-cts-1                      6m           375Mi
ds-cts-2                      7m           367Mi
ds-idrepo-0                   1187m        13822Mi
ds-idrepo-1                   11m          13635Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          945m         3400Mi
idm-65858d8c4c-8ff69          765m         3547Mi
lodemon-56989b88bb-nm2fw      5m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   123m         500Mi
17:24:14 DEBUG --- stderr ---
17:24:14 DEBUG
17:24:14 INFO
17:24:14 INFO [loop_until]: kubectl --namespace=xlou top node
17:24:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:24:15 INFO [loop_until]: OK (rc = 0)
17:24:15 DEBUG --- stdout ---
17:24:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1259Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   66m          0%     5406Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   62m          0%     5537Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   67m          0%     5502Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   896m         5%     4833Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   202m         1%     2138Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1101m        6%     4368Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             64m          0%     1055Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             58m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             61m          0%     1098Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1296m        8%     14355Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             65m          0%     14121Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             61m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       194m         1%     2017Mi          3%
17:24:15 DEBUG --- stderr ---
17:24:15 DEBUG
17:25:14 INFO
17:25:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:25:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:25:14 INFO [loop_until]: OK (rc = 0)
17:25:14 DEBUG --- stdout ---
17:25:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           11m          4449Mi
am-55f77847b7-ngpns           9m           4447Mi
am-55f77847b7-q6zcv           10m          4467Mi
ds-cts-0                      7m           401Mi
ds-cts-1                      5m           375Mi
ds-cts-2                      8m           366Mi
ds-idrepo-0                   1230m        13808Mi
ds-idrepo-1                   13m          13635Mi
ds-idrepo-2                   12m          13567Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          1007m        3396Mi
idm-65858d8c4c-8ff69          727m         3540Mi
lodemon-56989b88bb-nm2fw      5m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   127m         501Mi
17:25:14 DEBUG --- stderr ---
17:25:14 DEBUG
17:25:15 INFO
17:25:15 INFO [loop_until]: kubectl --namespace=xlou top node
17:25:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:25:15 INFO [loop_until]: OK (rc = 0)
17:25:15 DEBUG --- stdout ---
17:25:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m          0%     1259Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m          0%     5406Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m          0%     5539Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m          0%     5495Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   793m         4%     4824Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   199m         1%     2134Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1013m        6%     4361Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             63m          0%     1058Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m          0%     1063Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             58m          0%     1100Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1221m        7%     14341Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             63m          0%     14117Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             63m          0%     14174Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       172m         1%     2018Mi          3%
17:25:15 DEBUG --- stderr ---
17:25:15 DEBUG
17:26:14 INFO
17:26:14 INFO [loop_until]: kubectl --namespace=xlou top pods
17:26:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:26:14 INFO [loop_until]: OK (rc = 0)
17:26:14 DEBUG --- stdout ---
17:26:14 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           7m           4449Mi
am-55f77847b7-ngpns           11m          4447Mi
am-55f77847b7-q6zcv           8m           4467Mi
ds-cts-0                      7m           401Mi
ds-cts-1                      5m           375Mi
ds-cts-2                      7m           366Mi
ds-idrepo-0                   1088m        13827Mi
ds-idrepo-1                   13m          13635Mi
ds-idrepo-2                   11m          13567Mi
end-user-ui-6845bc78c7-m5k2c  1m           4Mi
idm-65858d8c4c-5kwbg          698m         3397Mi
idm-65858d8c4c-8ff69          770m         3541Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   110m         509Mi
17:26:14 DEBUG --- stderr ---
17:26:14 DEBUG
17:26:15 INFO
17:26:15 INFO [loop_until]: kubectl --namespace=xlou top node
17:26:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:26:15 INFO [loop_until]: OK (rc = 0)
17:26:15 DEBUG --- stdout ---
17:26:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   78m          0%     1262Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m          0%     5408Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   63m          0%     5538Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   65m          0%     5498Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   833m         5%     4825Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   209m         1%     2138Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   760m         4%     4360Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             65m          0%     1052Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             55m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             60m          0%     1103Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1107m        6%     14357Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             63m          0%     14118Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             63m          0%     14176Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       179m         1%     2025Mi          3%
17:26:15 DEBUG --- stderr ---
17:26:15 DEBUG
17:27:15 INFO
17:27:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:27:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:27:15 INFO [loop_until]: OK (rc = 0)
17:27:15 DEBUG --- stdout ---
17:27:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           9m           4449Mi
am-55f77847b7-ngpns           9m           4447Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      8m           401Mi
ds-cts-1                      8m           377Mi
ds-cts-2                      8m           366Mi
ds-idrepo-0                   1007m        13799Mi
ds-idrepo-1                   12m          13635Mi
ds-idrepo-2                   11m          13567Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          908m         3398Mi
idm-65858d8c4c-8ff69          733m         3548Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   143m         507Mi
17:27:15 DEBUG --- stderr ---
17:27:15 DEBUG
17:27:15 INFO
17:27:15 INFO [loop_until]: kubectl --namespace=xlou top node
17:27:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:27:15 INFO [loop_until]: OK (rc = 0)
17:27:15 DEBUG --- stdout ---
17:27:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m          0%     1257Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m          0%     5405Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   61m          0%     5539Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m          0%     5500Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   760m         4%     4834Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   191m         1%     2138Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   958m         6%     4361Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             62m          0%     1057Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             55m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             57m          0%     1105Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1102m        6%     14359Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14121Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             62m          0%     14174Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       221m         1%     2024Mi          3%
17:27:15 DEBUG --- stderr ---
17:27:15 DEBUG
17:28:15 INFO
17:28:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:28:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:28:15 INFO [loop_until]: OK (rc = 0)
17:28:15 DEBUG --- stdout ---
17:28:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4449Mi
am-55f77847b7-ngpns           9m           4448Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      6m           401Mi
ds-cts-1                      5m           375Mi
ds-cts-2                      8m           367Mi
ds-idrepo-0                   876m         13801Mi
ds-idrepo-1                   12m          13635Mi
ds-idrepo-2                   11m          13567Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          836m         3399Mi
idm-65858d8c4c-8ff69          666m         3551Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   92m          507Mi
17:28:15 DEBUG --- stderr ---
17:28:15 DEBUG
17:28:15 INFO
17:28:15 INFO [loop_until]: kubectl --namespace=xlou top node
17:28:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:28:15 INFO [loop_until]: OK (rc = 0)
17:28:15 DEBUG --- stdout ---
17:28:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1255Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m          0%     5407Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     5536Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m          0%     5499Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   693m         4%     4835Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   198m         1%     2138Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   903m         5%     4361Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             67m          0%     1057Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             60m          0%     1103Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1085m        6%     14324Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14121Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             66m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       170m         1%     2023Mi          3%
17:28:15 DEBUG --- stderr ---
17:28:15 DEBUG
17:29:15 INFO
17:29:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:29:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:29:15 INFO [loop_until]: OK (rc = 0)
17:29:15 DEBUG --- stdout ---
17:29:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           7m           4449Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           8m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      9m           367Mi
ds-idrepo-0                   1149m        13796Mi
ds-idrepo-1                   12m          13634Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           4Mi
idm-65858d8c4c-5kwbg          964m         3404Mi
idm-65858d8c4c-8ff69          801m         3557Mi
lodemon-56989b88bb-nm2fw      5m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   107m         507Mi
17:29:15 DEBUG --- stderr ---
17:29:15 DEBUG
17:29:15 INFO
17:29:15 INFO [loop_until]: kubectl --namespace=xlou top node
17:29:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:29:15 INFO [loop_until]: OK (rc = 0)
17:29:15 DEBUG --- stdout ---
17:29:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   73m          0%     1260Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   63m          0%     5408Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   63m          0%     5539Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m          0%     5497Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   779m         4%     4844Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   195m         1%     2139Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1059m        6%     4364Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             63m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m          0%     1060Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1109Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1159m        7%     14329Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             62m          0%     14122Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             63m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       173m         1%     2025Mi          3%
17:29:15 DEBUG --- stderr ---
17:29:15 DEBUG
17:30:15 INFO
17:30:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:30:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:30:15 INFO [loop_until]: OK (rc = 0)
17:30:15 DEBUG --- stdout ---
17:30:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           9m           4449Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           9m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      7m           367Mi
ds-idrepo-0                   987m         13794Mi
ds-idrepo-1                   12m          13635Mi
ds-idrepo-2                   11m          13567Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          862m         3405Mi
idm-65858d8c4c-8ff69          637m         3553Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   97m          507Mi
17:30:15 DEBUG --- stderr ---
17:30:15 DEBUG
17:30:15 INFO
17:30:15 INFO [loop_until]: kubectl --namespace=xlou top node
17:30:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:30:15 INFO [loop_until]: OK (rc = 0)
17:30:15 DEBUG --- stdout ---
17:30:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1258Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m          0%     5406Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   67m          0%     5539Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   65m          0%     5499Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   745m         4%     4841Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   191m         1%     2135Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   849m         5%     4366Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             64m          0%     1056Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             62m          0%     1119Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1010m        6%     14357Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14121Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             63m          0%     14177Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       169m         1%     2023Mi          3%
17:30:15 DEBUG --- stderr ---
17:30:15 DEBUG
17:31:15 INFO
17:31:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:31:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:31:15 INFO [loop_until]: OK (rc = 0)
17:31:15 DEBUG --- stdout ---
17:31:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           9m           4449Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           8m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      8m           366Mi
ds-idrepo-0                   1032m        13797Mi
ds-idrepo-1                   12m          13635Mi
ds-idrepo-2                   13m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          929m         3409Mi
idm-65858d8c4c-8ff69          637m         3555Mi
lodemon-56989b88bb-nm2fw      1m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   146m         508Mi
17:31:15 DEBUG --- stderr ---
17:31:15 DEBUG
17:31:15 INFO
17:31:15 INFO [loop_until]: kubectl --namespace=xlou top node
17:31:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:31:15 INFO [loop_until]: OK (rc = 0)
17:31:15 DEBUG --- stdout ---
17:31:15 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m          0%     1261Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m          0%     5408Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     5538Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   64m          0%     5497Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   689m         4%     4842Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   195m         1%     2135Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   980m         6%     4370Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             63m          0%     1053Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             61m          0%     1108Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1083m        6%     14356Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             67m          0%     14122Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             65m          0%     14173Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       215m         1%     2026Mi          3%
17:31:15 DEBUG --- stderr ---
17:31:15 DEBUG
17:32:15 INFO
17:32:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:32:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:32:15 INFO [loop_until]: OK (rc = 0)
17:32:15 DEBUG --- stdout ---
17:32:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           9m           4449Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      7m           366Mi
ds-idrepo-0                   1057m        13800Mi
ds-idrepo-1                   13m          13635Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          929m         3411Mi
idm-65858d8c4c-8ff69          744m         3556Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   98m          508Mi
17:32:15 DEBUG --- stderr ---
17:32:15 DEBUG
17:32:16 INFO
17:32:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:32:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:32:16 INFO [loop_until]: OK (rc = 0)
17:32:16 DEBUG --- stdout ---
17:32:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1259Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   66m          0%     5408Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   62m          0%     5539Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m          0%     5497Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   837m         5%     4844Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   203m         1%     2134Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1012m        6%     4372Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             64m          0%     1055Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             51m          0%     1058Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             58m          0%     1105Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1235m        7%     14355Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             65m          0%     14120Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             65m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       169m         1%     2027Mi          3%
17:32:16 DEBUG --- stderr ---
17:32:16 DEBUG
17:33:15 INFO
17:33:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:33:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:33:15 INFO [loop_until]: OK (rc = 0)
17:33:15 DEBUG --- stdout ---
17:33:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4449Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           8m           4467Mi
ds-cts-0                      7m           408Mi
ds-cts-1                      4m           376Mi
ds-cts-2                      14m          371Mi
ds-idrepo-0                   1043m        13806Mi
ds-idrepo-1                   12m          13634Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          1017m        3412Mi
idm-65858d8c4c-8ff69          736m         3560Mi
lodemon-56989b88bb-nm2fw      1m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   106m         508Mi
17:33:15 DEBUG --- stderr ---
17:33:15 DEBUG
17:33:16 INFO
17:33:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:33:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:33:16 INFO [loop_until]: OK (rc = 0)
17:33:16 DEBUG --- stdout ---
17:33:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   79m          0%     1258Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m          0%     5411Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   61m          0%     5538Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m          0%     5497Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   803m         5%     4848Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   204m         1%     2127Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1125m        7%     4373Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             73m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m          0%     1058Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             67m          0%     1108Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1180m        7%     14337Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             64m          0%     14123Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             65m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       165m         1%     2024Mi          3%
17:33:16 DEBUG --- stderr ---
17:33:16 DEBUG
17:34:15 INFO
17:34:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:34:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:34:15 INFO [loop_until]: OK (rc = 0)
17:34:15 DEBUG --- stdout ---
17:34:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           9m           4449Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      7m           370Mi
ds-idrepo-0                   1051m        13798Mi
ds-idrepo-1                   16m          13634Mi
ds-idrepo-2                   12m          13567Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          903m         3412Mi
idm-65858d8c4c-8ff69          767m         3569Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   93m          508Mi
17:34:15 DEBUG --- stderr ---
17:34:15 DEBUG
17:34:16 INFO
17:34:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:34:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:34:16 INFO [loop_until]: OK (rc = 0)
17:34:16 DEBUG --- stdout ---
17:34:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   80m          0%     1256Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   63m          0%     5409Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     5541Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m          0%     5499Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   819m         5%     4858Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   192m         1%     2135Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   915m         5%     4375Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             63m          0%     1060Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             52m          0%     1060Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             58m          0%     1108Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1056m        6%     14327Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14121Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             68m          0%     14171Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       167m         1%     2026Mi          3%
17:34:16 DEBUG --- stderr ---
17:34:16 DEBUG
17:35:15 INFO
17:35:15 INFO [loop_until]: kubectl --namespace=xlou top pods
17:35:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:35:15 INFO [loop_until]: OK (rc = 0)
17:35:15 DEBUG --- stdout ---
17:35:15 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           9m           4450Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           9m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      7m           370Mi
ds-idrepo-0                   1005m        13825Mi
ds-idrepo-1                   19m          13635Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           4Mi
idm-65858d8c4c-5kwbg          852m         3416Mi
idm-65858d8c4c-8ff69          673m         3560Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   92m          508Mi
17:35:15 DEBUG --- stderr ---
17:35:15 DEBUG
17:35:16 INFO
17:35:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:35:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:35:16 INFO [loop_until]: OK (rc = 0)
17:35:16 DEBUG --- stdout ---
17:35:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m          0%     1263Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m          0%     5409Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   63m          0%     5537Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   62m          0%     5498Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   755m         4%     4849Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   198m         1%     2139Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   925m         5%     4379Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             61m          0%     1062Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m          0%     1062Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1107Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1055m        6%     14355Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14122Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             72m          0%     14174Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       168m         1%     2039Mi          3%
17:35:16 DEBUG --- stderr ---
17:35:16 DEBUG
17:36:16 INFO
17:36:16 INFO [loop_until]: kubectl --namespace=xlou top pods
17:36:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:36:16 INFO [loop_until]: OK (rc = 0)
17:36:16 DEBUG --- stdout ---
17:36:16 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4450Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           8m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      6m           376Mi
ds-cts-2                      7m           371Mi
ds-idrepo-0                   1017m        13822Mi
ds-idrepo-1                   16m          13635Mi
ds-idrepo-2                   13m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          905m         3418Mi
idm-65858d8c4c-8ff69          661m         3566Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   93m          508Mi
17:36:16 DEBUG --- stderr ---
17:36:16 DEBUG
17:36:16 INFO
17:36:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:36:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:36:16 INFO [loop_until]: OK (rc = 0)
17:36:16 DEBUG --- stdout ---
17:36:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   80m          0%     1261Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m          0%     5409Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     5536Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   60m          0%     5499Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   694m         4%     4851Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   196m         1%     2150Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1007m        6%     4378Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             64m          0%     1060Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m          0%     1057Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             62m          0%     1107Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1031m        6%     14337Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             65m          0%     14122Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             71m          0%     14188Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       161m         1%     2028Mi          3%
17:36:16 DEBUG --- stderr ---
17:36:16 DEBUG
17:37:16 INFO
17:37:16 INFO [loop_until]: kubectl --namespace=xlou top pods
17:37:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:37:16 INFO [loop_until]: OK (rc = 0)
17:37:16 DEBUG --- stdout ---
17:37:16 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4450Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      10m          376Mi
ds-cts-2                      8m           370Mi
ds-idrepo-0                   942m         13800Mi
ds-idrepo-1                   12m          13634Mi
ds-idrepo-2                   15m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          937m         3419Mi
idm-65858d8c4c-8ff69          539m         3566Mi
lodemon-56989b88bb-nm2fw      6m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   92m          509Mi
17:37:16 DEBUG --- stderr ---
17:37:16 DEBUG
17:37:16 INFO
17:37:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:37:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:37:16 INFO [loop_until]: OK (rc = 0)
17:37:16 DEBUG --- stdout ---
17:37:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   79m          0%     1258Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   66m          0%     5404Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   63m          0%     5540Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   62m          0%     5501Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   693m         4%     4849Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   192m         1%     2134Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1004m        6%     4380Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             63m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             58m          0%     1060Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             58m          0%     1109Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1070m        6%     14333Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14124Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             66m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       162m         1%     2028Mi          3%
17:37:16 DEBUG --- stderr ---
17:37:16 DEBUG
17:38:16 INFO
17:38:16 INFO [loop_until]: kubectl --namespace=xlou top pods
17:38:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:38:16 INFO [loop_until]: OK (rc = 0)
17:38:16 DEBUG --- stdout ---
17:38:16 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4450Mi
am-55f77847b7-ngpns           8m           4449Mi
am-55f77847b7-q6zcv           8m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      6m           376Mi
ds-cts-2                      10m          371Mi
ds-idrepo-0                   980m         13805Mi
ds-idrepo-1                   12m          13635Mi
ds-idrepo-2                   14m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          1010m        3420Mi
idm-65858d8c4c-8ff69          844m         3570Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   113m         509Mi
17:38:16 DEBUG --- stderr ---
17:38:16 DEBUG
17:38:16 INFO
17:38:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:38:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:38:16 INFO [loop_until]: OK (rc = 0)
17:38:16 DEBUG --- stdout ---
17:38:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   74m          0%     1259Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   63m          0%     5408Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   63m          0%     5535Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   64m          0%     5502Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   830m         5%     4864Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   197m         1%     2134Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   1075m        6%     4389Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             71m          0%     1059Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             58m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             61m          0%     1109Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1091m        6%     14344Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             66m          0%     14121Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             63m          0%     14176Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       189m         1%     2024Mi          3%
17:38:16 DEBUG --- stderr ---
17:38:16 DEBUG
17:39:16 INFO
17:39:16 INFO [loop_until]: kubectl --namespace=xlou top pods
17:39:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:39:16 INFO [loop_until]: OK (rc = 0)
17:39:16 DEBUG --- stdout ---
17:39:16 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4450Mi
am-55f77847b7-ngpns           8m           4448Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      7m           371Mi
ds-idrepo-0                   1091m        13823Mi
ds-idrepo-1                   12m          13634Mi
ds-idrepo-2                   11m          13567Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          931m         3422Mi
idm-65858d8c4c-8ff69          717m         3570Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   94m          509Mi
17:39:16 DEBUG --- stderr ---
17:39:16 DEBUG
17:39:16 INFO
17:39:16 INFO [loop_until]: kubectl --namespace=xlou top node
17:39:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:39:16 INFO [loop_until]: OK (rc = 0)
17:39:16 DEBUG --- stdout ---
17:39:16 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m          0%     1262Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   63m          0%     5407Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     5540Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   67m          0%     5501Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   811m         5%     4857Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   196m         1%     2135Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   980m         6%     4381Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             62m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1106Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1115m        7%     14333Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             62m          0%     14121Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             65m          0%     14175Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       166m         1%     2023Mi          3%
17:39:16 DEBUG --- stderr ---
17:39:16 DEBUG
17:40:16 INFO
17:40:16 INFO [loop_until]: kubectl --namespace=xlou top pods
17:40:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:40:16 INFO [loop_until]: OK (rc = 0)
17:40:16 DEBUG --- stdout ---
17:40:16 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4450Mi
am-55f77847b7-ngpns           10m          4449Mi
am-55f77847b7-q6zcv           7m           4467Mi
ds-cts-0                      7m           408Mi
ds-cts-1                      5m           376Mi
ds-cts-2                      7m           371Mi
ds-idrepo-0                   944m         13822Mi
ds-idrepo-1                   14m          13634Mi
ds-idrepo-2                   11m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          920m         3423Mi
idm-65858d8c4c-8ff69          631m         3571Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   92m          509Mi
17:40:16 DEBUG --- stderr ---
17:40:16 DEBUG
17:40:17 INFO
17:40:17 INFO [loop_until]: kubectl --namespace=xlou top node
17:40:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:40:17 INFO [loop_until]: OK (rc = 0)
17:40:17 DEBUG --- stdout ---
17:40:17 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1260Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   66m          0%     5407Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     5549Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   65m          0%     5501Mi          9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   687m         4%     4857Mi          8%
gke-xlou-cdm-default-pool-f05840a3-h81k   190m         1%     2138Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   928m         5%     4380Mi          7%
gke-xlou-cdm-ds-32e4dcb1-1l6p             65m          0%     1060Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             56m          0%     1061Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1105Mi          1%
gke-xlou-cdm-ds-32e4dcb1-b374             1012m        6%     14354Mi         24%
gke-xlou-cdm-ds-32e4dcb1-n920             64m          0%     14118Mi         24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             66m          0%     14176Mi         24%
gke-xlou-cdm-frontend-a8771548-k40m       151m         0%     2023Mi          3%
17:40:17 DEBUG --- stderr ---
17:40:17 DEBUG
17:41:16 INFO
17:41:16 INFO [loop_until]: kubectl --namespace=xlou top pods
17:41:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:41:16 INFO [loop_until]: OK (rc = 0)
17:41:16 DEBUG --- stdout ---
17:41:16 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb     1m           4Mi
am-55f77847b7-7qk7g           8m           4450Mi
am-55f77847b7-ngpns           8m           4449Mi
am-55f77847b7-q6zcv           8m           4467Mi
ds-cts-0                      6m           408Mi
ds-cts-1                      6m           376Mi
ds-cts-2                      7m           371Mi
ds-idrepo-0                   1006m        13806Mi
ds-idrepo-1                   12m          13634Mi
ds-idrepo-2                   12m          13568Mi
end-user-ui-6845bc78c7-m5k2c  1m           3Mi
idm-65858d8c4c-5kwbg          1011m        3419Mi
idm-65858d8c4c-8ff69          728m         3573Mi
lodemon-56989b88bb-nm2fw      2m           66Mi
login-ui-74d6fb46c-2qx2r      1m           3Mi
overseer-0-5fcfb8f45c-v6ck5   95m          510Mi
17:41:16 DEBUG --- stderr ---
17:41:16 DEBUG
17:41:17 INFO
17:41:17 INFO [loop_until]: kubectl --namespace=xlou top node
17:41:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:41:17 INFO [loop_until]: OK (rc = 0)
17:41:17 DEBUG --- stdout ---
17:41:17 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1259Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   62m          0%     5407Mi          9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m          0%     5538
9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5499Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 775m 4% 4861Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 192m 1% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 983m 6% 4378Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1076m 6% 14336Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14123Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14178Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 164m 1% 2027Mi 3% 17:41:17 DEBUG --- stderr --- 17:41:17 DEBUG 17:42:16 INFO 17:42:16 INFO [loop_until]: kubectl --namespace=xlou top pods 17:42:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:42:16 INFO [loop_until]: OK (rc = 0) 17:42:16 DEBUG --- stdout --- 17:42:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4450Mi am-55f77847b7-ngpns 9m 4449Mi am-55f77847b7-q6zcv 9m 4467Mi ds-cts-0 8m 408Mi ds-cts-1 7m 376Mi ds-cts-2 8m 371Mi ds-idrepo-0 996m 13798Mi ds-idrepo-1 12m 13634Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 875m 3420Mi idm-65858d8c4c-8ff69 593m 3573Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 89m 509Mi 17:42:16 DEBUG --- stderr --- 17:42:16 DEBUG 17:42:17 INFO 17:42:17 INFO [loop_until]: kubectl --namespace=xlou top node 17:42:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:42:17 INFO [loop_until]: OK (rc = 0) 17:42:17 DEBUG --- stdout --- 17:42:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5408Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5495Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 641m 4% 4857Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 200m 1% 
2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 993m 6% 4380Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1028m 6% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14124Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14177Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 164m 1% 2025Mi 3% 17:42:17 DEBUG --- stderr --- 17:42:17 DEBUG 17:43:16 INFO 17:43:16 INFO [loop_until]: kubectl --namespace=xlou top pods 17:43:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:43:16 INFO [loop_until]: OK (rc = 0) 17:43:16 DEBUG --- stdout --- 17:43:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4450Mi am-55f77847b7-ngpns 9m 4449Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 6m 408Mi ds-cts-1 7m 376Mi ds-cts-2 9m 372Mi ds-idrepo-0 1124m 13824Mi ds-idrepo-1 13m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 962m 3423Mi idm-65858d8c4c-8ff69 848m 3574Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 101m 510Mi 17:43:16 DEBUG --- stderr --- 17:43:16 DEBUG 17:43:17 INFO 17:43:17 INFO [loop_until]: kubectl --namespace=xlou top node 17:43:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:43:17 INFO [loop_until]: OK (rc = 0) 17:43:17 DEBUG --- stdout --- 17:43:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5498Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 861m 5% 4862Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 191m 1% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1037m 6% 4382Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1064Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1170m 7% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14181Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 170m 1% 2024Mi 3% 17:43:17 DEBUG --- stderr --- 17:43:17 DEBUG 17:44:16 INFO 17:44:16 INFO [loop_until]: kubectl --namespace=xlou top pods 17:44:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:44:16 INFO [loop_until]: OK (rc = 0) 17:44:16 DEBUG --- stdout --- 17:44:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4450Mi am-55f77847b7-ngpns 10m 4449Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 6m 408Mi ds-cts-1 7m 376Mi ds-cts-2 8m 371Mi ds-idrepo-0 997m 13804Mi ds-idrepo-1 18m 13634Mi ds-idrepo-2 10m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 912m 3429Mi idm-65858d8c4c-8ff69 762m 3576Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 103m 510Mi 17:44:16 DEBUG --- stderr --- 17:44:16 DEBUG 17:44:17 INFO 17:44:17 INFO [loop_until]: kubectl --namespace=xlou top node 17:44:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:44:17 INFO [loop_until]: OK (rc = 0) 17:44:17 DEBUG --- stdout --- 17:44:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1255Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5410Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5543Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 767m 4% 4858Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 197m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 967m 6% 4388Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1051m 6% 14342Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14124Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 14178Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 207m 1% 2027Mi 3% 17:44:17 DEBUG --- stderr --- 17:44:17 DEBUG 17:45:17 INFO 17:45:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:45:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:45:17 INFO [loop_until]: OK (rc = 0) 17:45:17 DEBUG --- stdout --- 17:45:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 13m 4450Mi am-55f77847b7-ngpns 9m 4449Mi am-55f77847b7-q6zcv 9m 4467Mi ds-cts-0 6m 408Mi ds-cts-1 17m 376Mi ds-cts-2 9m 371Mi ds-idrepo-0 1042m 13799Mi ds-idrepo-1 13m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 876m 3431Mi idm-65858d8c4c-8ff69 803m 3577Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 148m 511Mi 17:45:17 DEBUG --- stderr --- 17:45:17 DEBUG 17:45:17 INFO 17:45:17 INFO [loop_until]: kubectl --namespace=xlou top node 17:45:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:45:17 INFO [loop_until]: OK (rc = 0) 17:45:17 DEBUG --- stdout --- 17:45:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5411Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5542Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5499Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 843m 5% 4862Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 192m 1% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 909m 5% 4390Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 74m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1112m 6% 14335Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14123Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14180Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 217m 1% 2030Mi 3% 17:45:17 DEBUG --- stderr --- 17:45:17 DEBUG 17:46:17 INFO 
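The DEBUG stdout blocks above are plain `kubectl top pods` tables: one header row, then one `NAME CPU(cores) MEMORY(bytes)` row per pod, with CPU in millicores (`m`) and memory in MiB (`Mi`). A minimal sketch of turning one captured table into numbers (the helper names here are hypothetical, not part of the lodemon harness):

```python
def parse_quantity(q: str) -> int:
    """Strip a kubectl quantity suffix: '980m' -> 980 (millicores), '66Mi' -> 66 (MiB)."""
    for suffix in ("Mi", "m"):
        if q.endswith(suffix):
            return int(q[: -len(suffix)])
    return int(q)

def parse_top_pods(stdout: str) -> dict:
    """Map pod name -> (cpu_millicores, memory_mib) from `kubectl top pods` output."""
    rows = {}
    for line in stdout.strip().splitlines()[1:]:  # skip the NAME/CPU/MEMORY header
        name, cpu, mem = line.split()
        rows[name] = (parse_quantity(cpu), parse_quantity(mem))
    return rows

sample = """NAME CPU(cores) MEMORY(bytes)
ds-idrepo-0 980m 13805Mi
lodemon-56989b88bb-nm2fw 2m 66Mi"""
print(parse_top_pods(sample)["ds-idrepo-0"])  # (980, 13805)
```

This kind of post-processing is presumably what the `/monitoring/average` endpoint logged further below draws on when it aggregates a time window.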
17:46:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:46:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:46:17 INFO [loop_until]: OK (rc = 0) 17:46:17 DEBUG --- stdout --- 17:46:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4450Mi am-55f77847b7-ngpns 7m 4449Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 21m 408Mi ds-cts-1 7m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 1015m 13802Mi ds-idrepo-1 13m 13634Mi ds-idrepo-2 10m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 976m 3432Mi idm-65858d8c4c-8ff69 720m 3573Mi lodemon-56989b88bb-nm2fw 1m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 99m 511Mi 17:46:17 DEBUG --- stderr --- 17:46:17 DEBUG 17:46:17 INFO 17:46:17 INFO [loop_until]: kubectl --namespace=xlou top node 17:46:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:46:17 INFO [loop_until]: OK (rc = 0) 17:46:17 DEBUG --- stdout --- 17:46:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5410Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5539Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 777m 4% 4857Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 207m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1045m 6% 4393Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 69m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1125m 7% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 14136Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14178Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 168m 1% 2026Mi 3% 17:46:17 DEBUG --- stderr --- 17:46:17 DEBUG 17:47:17 INFO 17:47:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:47:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:47:17 INFO [loop_until]: OK (rc 
= 0) 17:47:17 DEBUG --- stdout --- 17:47:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 9m 4449Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 6m 408Mi ds-cts-1 7m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 912m 13805Mi ds-idrepo-1 13m 13634Mi ds-idrepo-2 10m 13567Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 859m 3434Mi idm-65858d8c4c-8ff69 529m 3574Mi lodemon-56989b88bb-nm2fw 7m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 87m 511Mi 17:47:17 DEBUG --- stderr --- 17:47:17 DEBUG 17:47:17 INFO 17:47:17 INFO [loop_until]: kubectl --namespace=xlou top node 17:47:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:47:17 INFO [loop_until]: OK (rc = 0) 17:47:17 DEBUG --- stdout --- 17:47:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5412Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5542Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5501Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 564m 3% 4859Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 190m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 933m 5% 4394Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 903m 5% 14344Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14176Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 153m 0% 2025Mi 3% 17:47:17 DEBUG --- stderr --- 17:47:17 DEBUG 17:48:17 INFO 17:48:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:48:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:48:17 INFO [loop_until]: OK (rc = 0) 17:48:17 DEBUG --- stdout --- 17:48:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 10m 4451Mi am-55f77847b7-ngpns 9m 
4449Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 7m 408Mi ds-cts-1 7m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 1069m 13805Mi ds-idrepo-1 14m 13634Mi ds-idrepo-2 10m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 941m 3429Mi idm-65858d8c4c-8ff69 649m 3575Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 94m 511Mi 17:48:17 DEBUG --- stderr --- 17:48:17 DEBUG 17:48:17 INFO 17:48:17 INFO [loop_until]: kubectl --namespace=xlou top node 17:48:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:48:18 INFO [loop_until]: OK (rc = 0) 17:48:18 DEBUG --- stdout --- 17:48:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5413Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5497Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 731m 4% 4862Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 197m 1% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 989m 6% 4387Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1072m 6% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14121Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14179Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 167m 1% 2026Mi 3% 17:48:18 DEBUG --- stderr --- 17:48:18 DEBUG 17:49:17 INFO 17:49:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:49:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:49:17 INFO [loop_until]: OK (rc = 0) 17:49:17 DEBUG --- stdout --- 17:49:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4451Mi am-55f77847b7-ngpns 9m 4449Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 7m 409Mi ds-cts-1 7m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 877m 13805Mi ds-idrepo-1 16m 13635Mi ds-idrepo-2 10m 13568Mi 
end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 811m 3432Mi idm-65858d8c4c-8ff69 650m 3579Mi lodemon-56989b88bb-nm2fw 4m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 107m 524Mi 17:49:17 DEBUG --- stderr --- 17:49:17 DEBUG 17:49:18 INFO 17:49:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:49:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:49:18 INFO [loop_until]: OK (rc = 0) 17:49:18 DEBUG --- stdout --- 17:49:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5411Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5502Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 731m 4% 4868Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 187m 1% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 880m 5% 4389Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 953m 5% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14180Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 176m 1% 2041Mi 3% 17:49:18 DEBUG --- stderr --- 17:49:18 DEBUG 17:50:17 INFO 17:50:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:50:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:50:17 INFO [loop_until]: OK (rc = 0) 17:50:17 DEBUG --- stdout --- 17:50:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 8m 4449Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 15m 409Mi ds-cts-1 7m 377Mi ds-cts-2 9m 372Mi ds-idrepo-0 10m 13823Mi ds-idrepo-1 13m 13635Mi ds-idrepo-2 9m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 3432Mi idm-65858d8c4c-8ff69 7m 3579Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi 
overseer-0-5fcfb8f45c-v6ck5 30m 524Mi 17:50:17 DEBUG --- stderr --- 17:50:17 DEBUG 17:50:18 INFO 17:50:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:50:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:50:18 INFO [loop_until]: OK (rc = 0) 17:50:18 DEBUG --- stdout --- 17:50:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1257Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5410Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5501Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4869Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4391Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 71m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14179Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2041Mi 3% 17:50:18 DEBUG --- stderr --- 17:50:18 DEBUG 17:51:17 INFO 17:51:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:51:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:51:17 INFO [loop_until]: OK (rc = 0) 17:51:17 DEBUG --- stdout --- 17:51:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4451Mi am-55f77847b7-ngpns 8m 4449Mi am-55f77847b7-q6zcv 8m 4467Mi ds-cts-0 10m 410Mi ds-cts-1 7m 377Mi ds-cts-2 7m 370Mi ds-idrepo-0 11m 13822Mi ds-idrepo-1 13m 13635Mi ds-idrepo-2 9m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 3432Mi idm-65858d8c4c-8ff69 7m 3579Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1m 101Mi 17:51:17 DEBUG --- stderr --- 17:51:17 DEBUG 17:51:18 INFO 17:51:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:51:18 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:51:18 INFO [loop_until]: OK (rc = 0) 17:51:18 DEBUG --- stdout --- 17:51:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5413Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5494Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4869Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2131Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 4391Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14127Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14179Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1626Mi 2% 17:51:18 DEBUG --- stderr --- 17:51:18 DEBUG 127.0.0.1 - - [11/Aug/2023 17:51:29] "GET /monitoring/average?start_time=23-08-11_16:20:57&stop_time=23-08-11_16:49:28 HTTP/1.1" 200 - 17:52:17 INFO 17:52:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:52:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:52:17 INFO [loop_until]: OK (rc = 0) 17:52:17 DEBUG --- stdout --- 17:52:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 13m 4451Mi am-55f77847b7-q6zcv 9m 4467Mi ds-cts-0 8m 409Mi ds-cts-1 7m 378Mi ds-cts-2 8m 371Mi ds-idrepo-0 10m 13822Mi ds-idrepo-1 12m 13634Mi ds-idrepo-2 9m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 3431Mi idm-65858d8c4c-8ff69 63m 3579Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 469m 399Mi 17:52:17 DEBUG --- stderr --- 17:52:17 DEBUG 17:52:18 INFO 17:52:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:52:18 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 17:52:18 INFO [loop_until]: OK (rc = 0) 17:52:18 DEBUG --- stdout --- 17:52:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 84m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5408Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 145m 0% 4869Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 113m 0% 4392Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 69m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 96m 0% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14122Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14179Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 619m 3% 1916Mi 3% 17:52:18 DEBUG --- stderr --- 17:52:18 DEBUG 17:53:17 INFO 17:53:17 INFO [loop_until]: kubectl --namespace=xlou top pods 17:53:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:53:17 INFO [loop_until]: OK (rc = 0) 17:53:17 DEBUG --- stdout --- 17:53:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 9m 4450Mi am-55f77847b7-q6zcv 18m 4467Mi ds-cts-0 6m 409Mi ds-cts-1 9m 378Mi ds-cts-2 8m 371Mi ds-idrepo-0 1843m 13823Mi ds-idrepo-1 12m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1460m 3435Mi idm-65858d8c4c-8ff69 1148m 3583Mi lodemon-56989b88bb-nm2fw 7m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 259m 514Mi 17:53:17 DEBUG --- stderr --- 17:53:17 DEBUG 17:53:18 INFO 17:53:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:53:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:53:18 INFO [loop_until]: OK (rc = 0) 17:53:18 DEBUG --- stdout --- 17:53:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5411Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5497Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1209m 7% 4871Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 235m 1% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1568m 9% 4392Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1931m 12% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14125Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14180Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 325m 2% 2032Mi 3% 17:53:18 DEBUG --- stderr --- 17:53:18 DEBUG 17:54:18 INFO 17:54:18 INFO [loop_until]: kubectl --namespace=xlou top pods 17:54:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:54:18 INFO [loop_until]: OK (rc = 0) 17:54:18 DEBUG --- stdout --- 17:54:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 8m 4450Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 7m 409Mi ds-cts-1 8m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 1607m 13822Mi ds-idrepo-1 12m 13634Mi ds-idrepo-2 10m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1375m 3437Mi idm-65858d8c4c-8ff69 970m 3585Mi lodemon-56989b88bb-nm2fw 1m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 190m 517Mi 17:54:18 DEBUG --- stderr --- 17:54:18 DEBUG 17:54:18 INFO 17:54:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:54:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:54:18 INFO [loop_until]: OK (rc = 0) 17:54:18 DEBUG --- stdout --- 17:54:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 
5537Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1014m 6% 4873Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 231m 1% 2129Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1397m 8% 4397Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1646m 10% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14127Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14181Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 255m 1% 2034Mi 3% 17:54:18 DEBUG --- stderr --- 17:54:18 DEBUG 17:55:18 INFO 17:55:18 INFO [loop_until]: kubectl --namespace=xlou top pods 17:55:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:55:18 INFO [loop_until]: OK (rc = 0) 17:55:18 DEBUG --- stdout --- 17:55:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 8m 4450Mi am-55f77847b7-q6zcv 6m 4467Mi ds-cts-0 6m 409Mi ds-cts-1 8m 377Mi ds-cts-2 8m 371Mi ds-idrepo-0 1501m 13817Mi ds-idrepo-1 12m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1154m 3440Mi idm-65858d8c4c-8ff69 984m 3588Mi lodemon-56989b88bb-nm2fw 1m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 168m 516Mi 17:55:18 DEBUG --- stderr --- 17:55:18 DEBUG 17:55:18 INFO 17:55:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:55:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:55:18 INFO [loop_until]: OK (rc = 0) 17:55:18 DEBUG --- stdout --- 17:55:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5408Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5513Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1056m 6% 4876Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 
226m 1% 2124Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1279m 8% 4396Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1550m 9% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14126Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14178Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 235m 1% 2035Mi 3% 17:55:18 DEBUG --- stderr --- 17:55:18 DEBUG 17:56:18 INFO 17:56:18 INFO [loop_until]: kubectl --namespace=xlou top pods 17:56:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:56:18 INFO [loop_until]: OK (rc = 0) 17:56:18 DEBUG --- stdout --- 17:56:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 8m 4450Mi am-55f77847b7-q6zcv 7m 4467Mi ds-cts-0 6m 409Mi ds-cts-1 10m 378Mi ds-cts-2 6m 371Mi ds-idrepo-0 1672m 13822Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1426m 3444Mi idm-65858d8c4c-8ff69 1128m 3589Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 185m 518Mi 17:56:18 DEBUG --- stderr --- 17:56:18 DEBUG 17:56:18 INFO 17:56:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:56:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:56:18 INFO [loop_until]: OK (rc = 0) 17:56:18 DEBUG --- stdout --- 17:56:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5502Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1245m 7% 4879Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 232m 1% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1483m 9% 4406Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 
1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1761m 11% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14128Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14184Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 244m 1% 2037Mi 3% 17:56:18 DEBUG --- stderr --- 17:56:18 DEBUG 17:57:18 INFO 17:57:18 INFO [loop_until]: kubectl --namespace=xlou top pods 17:57:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:57:18 INFO [loop_until]: OK (rc = 0) 17:57:18 DEBUG --- stdout --- 17:57:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4451Mi am-55f77847b7-ngpns 8m 4450Mi am-55f77847b7-q6zcv 8m 4467Mi ds-cts-0 6m 409Mi ds-cts-1 7m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 1496m 13813Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 1346m 3447Mi idm-65858d8c4c-8ff69 1030m 3592Mi lodemon-56989b88bb-nm2fw 2m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 143m 518Mi 17:57:18 DEBUG --- stderr --- 17:57:18 DEBUG 17:57:18 INFO 17:57:18 INFO [loop_until]: kubectl --namespace=xlou top node 17:57:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:57:18 INFO [loop_until]: OK (rc = 0) 17:57:18 DEBUG --- stdout --- 17:57:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5413Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5537Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5503Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1162m 7% 4880Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 236m 1% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1428m 8% 4408Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1597m 10% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14126Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-x4wx  60m  0%  14182Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  216m  1%  2039Mi  3%
17:57:18 DEBUG --- stderr ---
17:57:18 DEBUG
17:58:18 INFO
17:58:18 INFO [loop_until]: kubectl --namespace=xlou top pods
17:58:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:58:18 INFO [loop_until]: OK (rc = 0)
17:58:18 DEBUG --- stdout ---
17:58:18 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4452Mi
am-55f77847b7-ngpns  9m  4454Mi
am-55f77847b7-q6zcv  8m  4467Mi
ds-cts-0  6m  409Mi
ds-cts-1  8m  377Mi
ds-cts-2  7m  371Mi
ds-idrepo-0  1651m  13807Mi
ds-idrepo-1  12m  13635Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1585m  3450Mi
idm-65858d8c4c-8ff69  985m  3588Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  141m  518Mi
17:58:18 DEBUG --- stderr ---
17:58:18 DEBUG
17:58:19 INFO
17:58:19 INFO [loop_until]: kubectl --namespace=xlou top node
17:58:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:58:19 INFO [loop_until]: OK (rc = 0)
17:58:19 DEBUG --- stdout ---
17:58:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  82m  0%  1264Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  66m  0%  5412Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  65m  0%  5541Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  66m  0%  5507Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1104m  6%  4875Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  234m  1%  2139Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1601m  10%  4410Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  65m  0%  1064Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  65m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  62m  0%  1111Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1763m  11%  14362Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  63m  0%  14126Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  61m  0%  14185Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  210m  1%  2034Mi  3%
18:58:19 DEBUG --- stderr ---
17:58:19 DEBUG
17:59:18 INFO
17:59:18 INFO [loop_until]: kubectl --namespace=xlou top pods
17:59:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:59:18 INFO [loop_until]: OK (rc = 0)
17:59:18 DEBUG --- stdout ---
17:59:18 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4446Mi
am-55f77847b7-ngpns  8m  4454Mi
am-55f77847b7-q6zcv  7m  4467Mi
ds-cts-0  6m  409Mi
ds-cts-1  9m  380Mi
ds-cts-2  7m  371Mi
ds-idrepo-0  1410m  13822Mi
ds-idrepo-1  16m  13635Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1306m  3451Mi
idm-65858d8c4c-8ff69  1012m  3589Mi
lodemon-56989b88bb-nm2fw  5m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  128m  519Mi
17:59:18 DEBUG --- stderr ---
17:59:18 DEBUG
17:59:19 INFO
17:59:19 INFO [loop_until]: kubectl --namespace=xlou top node
17:59:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
17:59:19 INFO [loop_until]: OK (rc = 0)
17:59:19 DEBUG --- stdout ---
17:59:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  80m  0%  1267Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  62m  0%  5407Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  62m  0%  5541Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5508Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1151m  7%  4879Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  231m  1%  2143Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1308m  8%  4412Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  65m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  64m  0%  1066Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1111Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1598m  10%  14343Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  62m  0%  14126Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  66m  0%  14181Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  198m  1%  2038Mi  3%
17:59:19 DEBUG --- stderr ---
17:59:19 DEBUG
18:00:18 INFO
18:00:18 INFO [loop_until]: kubectl --namespace=xlou top pods
18:00:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:00:18 INFO [loop_until]: OK (rc = 0)
18:00:18 DEBUG --- stdout ---
18:00:18 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4446Mi
am-55f77847b7-ngpns  12m  4454Mi
am-55f77847b7-q6zcv  7m  4468Mi
ds-cts-0  7m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  8m  371Mi
ds-idrepo-0  1478m  13801Mi
ds-idrepo-1  12m  13634Mi
ds-idrepo-2  11m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1310m  3452Mi
idm-65858d8c4c-8ff69  1139m  3592Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  128m  519Mi
18:00:18 DEBUG --- stderr ---
18:00:18 DEBUG
18:00:19 INFO
18:00:19 INFO [loop_until]: kubectl --namespace=xlou top node
18:00:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:00:19 INFO [loop_until]: OK (rc = 0)
18:00:19 DEBUG --- stdout ---
18:00:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  80m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  69m  0%  5418Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  64m  0%  5542Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  68m  0%  5506Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1066m  6%  4878Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  232m  1%  2131Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1371m  8%  4413Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1060Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  63m  0%  1064Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1109Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1515m  9%  14343Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  62m  0%  14128Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  59m  0%  14183Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  196m  1%  2036Mi  3%
18:00:19 DEBUG --- stderr ---
18:00:19 DEBUG
18:01:18 INFO
18:01:18 INFO [loop_until]: kubectl --namespace=xlou top pods
18:01:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:01:18 INFO [loop_until]: OK (rc = 0)
18:01:18 DEBUG --- stdout ---
18:01:18 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4446Mi
am-55f77847b7-ngpns  12m  4454Mi
am-55f77847b7-q6zcv  8m  4468Mi
ds-cts-0  5m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  8m  372Mi
ds-idrepo-0  1617m  13822Mi
ds-idrepo-1  11m  13634Mi
ds-idrepo-2  16m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  4Mi
idm-65858d8c4c-5kwbg  1357m  3455Mi
idm-65858d8c4c-8ff69  1111m  3593Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  132m  520Mi
18:01:18 DEBUG --- stderr ---
18:01:18 DEBUG
18:01:19 INFO
18:01:19 INFO [loop_until]: kubectl --namespace=xlou top node
18:01:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:01:19 INFO [loop_until]: OK (rc = 0)
18:01:19 DEBUG --- stdout ---
18:01:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  79m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5408Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  66m  0%  5537Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  67m  0%  5502Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1309m  8%  4880Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  221m  1%  2137Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1377m  8%  4416Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  66m  0%  1061Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  62m  0%  1064Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1110Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1615m  10%  14338Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  71m  0%  14128Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  67m  0%  14187Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  194m  1%  2037Mi  3%
18:01:19 DEBUG --- stderr ---
18:01:19 DEBUG
18:02:18 INFO
18:02:18 INFO [loop_until]: kubectl --namespace=xlou top pods
18:02:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:02:18 INFO [loop_until]: OK (rc = 0)
18:02:18 DEBUG --- stdout ---
18:02:18 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4447Mi
am-55f77847b7-ngpns  8m  4454Mi
am-55f77847b7-q6zcv  7m  4472Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  8m  371Mi
ds-idrepo-0  1605m  13827Mi
ds-idrepo-1  11m  13627Mi
ds-idrepo-2  11m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1391m  3456Mi
idm-65858d8c4c-8ff69  1044m  3594Mi
lodemon-56989b88bb-nm2fw  7m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  131m  520Mi
18:02:18 DEBUG --- stderr ---
18:02:18 DEBUG
18:02:19 INFO
18:02:19 INFO [loop_until]: kubectl --namespace=xlou top node
18:02:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:02:19 INFO [loop_until]: OK (rc = 0)
18:02:19 DEBUG --- stdout ---
18:02:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  81m  0%  1259Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5408Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  63m  0%  5545Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  61m  0%  5504Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1054m  6%  4879Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  231m  1%  2133Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1380m  8%  4419Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  65m  0%  1063Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  63m  0%  1061Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  56m  0%  1109Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1688m  10%  14367Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  62m  0%  14128Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  60m  0%  14177Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  200m  1%  2033Mi  3%
18:02:19 DEBUG --- stderr ---
18:02:19 DEBUG
18:03:18 INFO
18:03:18 INFO [loop_until]: kubectl --namespace=xlou top pods
18:03:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:03:19 INFO [loop_until]: OK (rc = 0)
18:03:19 DEBUG --- stdout ---
18:03:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4448Mi
am-55f77847b7-ngpns  8m  4454Mi
am-55f77847b7-q6zcv  7m  4472Mi
ds-cts-0  5m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  371Mi
ds-idrepo-0  1746m  13823Mi
ds-idrepo-1  10m  13627Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  4Mi
idm-65858d8c4c-5kwbg  1541m  3458Mi
idm-65858d8c4c-8ff69  1073m  3613Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  129m  520Mi
18:03:19 DEBUG --- stderr ---
18:03:19 DEBUG
18:03:19 INFO
18:03:19 INFO [loop_until]: kubectl --namespace=xlou top node
18:03:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:03:19 INFO [loop_until]: OK (rc = 0)
18:03:19 DEBUG --- stdout ---
18:03:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  80m  0%  1260Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  63m  0%  5403Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  64m  0%  5544Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5502Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1029m  6%  4878Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  228m  1%  2134Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1678m  10%  4418Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  65m  0%  1064Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  64m  0%  1067Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1110Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1784m  11%  14348Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  64m  0%  14127Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  58m  0%  14176Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  205m  1%  2038Mi  3%
18:03:19 DEBUG --- stderr ---
18:03:19 DEBUG
18:04:19 INFO
18:04:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:04:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:04:19 INFO [loop_until]: OK (rc = 0)
18:04:19 DEBUG --- stdout ---
18:04:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4447Mi
am-55f77847b7-ngpns  8m  4454Mi
am-55f77847b7-q6zcv  8m  4472Mi
ds-cts-0  5m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  8m  371Mi
ds-idrepo-0  1691m  13822Mi
ds-idrepo-1  9m  13627Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1385m  3465Mi
idm-65858d8c4c-8ff69  1213m  3600Mi
lodemon-56989b88bb-nm2fw  7m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  136m  520Mi
18:04:19 DEBUG --- stderr ---
18:04:19 DEBUG
18:04:19 INFO
18:04:19 INFO [loop_until]: kubectl --namespace=xlou top node
18:04:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:04:19 INFO [loop_until]: OK (rc = 0)
18:04:19 DEBUG --- stdout ---
18:04:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  80m  0%  1265Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5407Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  61m  0%  5545Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5503Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1340m  8%  4887Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  230m  1%  2126Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1403m  8%  4422Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1059Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  62m  0%  1066Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1109Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1666m  10%  14340Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  65m  0%  14128Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  59m  0%  14177Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  203m  1%  2038Mi  3%
18:04:19 DEBUG --- stderr ---
18:04:19 DEBUG
18:05:19 INFO
18:05:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:05:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:05:19 INFO [loop_until]: OK (rc = 0)
18:05:19 DEBUG --- stdout ---
18:05:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4447Mi
am-55f77847b7-ngpns  7m  4454Mi
am-55f77847b7-q6zcv  7m  4472Mi
ds-cts-0  5m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  8m  371Mi
ds-idrepo-0  1807m  13800Mi
ds-idrepo-1  18m  13631Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1704m  3478Mi
idm-65858d8c4c-8ff69  1152m  3602Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  144m  519Mi
18:05:19 DEBUG --- stderr ---
18:05:19 DEBUG
18:05:19 INFO
18:05:19 INFO [loop_until]: kubectl --namespace=xlou top node
18:05:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:05:19 INFO [loop_until]: OK (rc = 0)
18:05:19 DEBUG --- stdout ---
18:05:19 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  79m  0%  1265Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  62m  0%  5408Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  64m  0%  5547Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5502Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1212m  7%  4887Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  235m  1%  2139Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1599m  10%  4437Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  64m  0%  1060Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  60m  0%  1063Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1110Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1709m  10%  14349Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  66m  0%  14128Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  72m  0%  14177Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  205m  1%  2037Mi  3%
18:05:19 DEBUG --- stderr ---
18:05:19 DEBUG
18:06:19 INFO
18:06:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:06:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:06:19 INFO [loop_until]: OK (rc = 0)
18:06:19 DEBUG --- stdout ---
18:06:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4448Mi
am-55f77847b7-ngpns  8m  4454Mi
am-55f77847b7-q6zcv  8m  4473Mi
ds-cts-0  5m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  373Mi
ds-idrepo-0  1571m  13800Mi
ds-idrepo-1  18m  13631Mi
ds-idrepo-2  19m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1539m  3463Mi
idm-65858d8c4c-8ff69  1017m  3609Mi
lodemon-56989b88bb-nm2fw  5m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  128m  520Mi
18:06:19 DEBUG --- stderr ---
18:06:19 DEBUG
18:06:19 INFO
18:06:19 INFO [loop_until]: kubectl --namespace=xlou top node
18:06:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:06:20 INFO [loop_until]: OK (rc = 0)
18:06:20 DEBUG --- stdout ---
18:06:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  79m  0%  1261Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5406Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  62m  0%  5547Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5501Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1076m  6%  4892Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  232m  1%  2137Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1564m  9%  4423Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  67m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  62m  0%  1066Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  57m  0%  1112Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1625m  10%  14347Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  68m  0%  14128Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  66m  0%  14182Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  204m  1%  2039Mi  3%
18:06:20 DEBUG --- stderr ---
18:06:20 DEBUG
18:07:19 INFO
18:07:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:07:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:07:19 INFO [loop_until]: OK (rc = 0)
18:07:19 DEBUG --- stdout ---
18:07:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4447Mi
am-55f77847b7-ngpns  8m  4454Mi
am-55f77847b7-q6zcv  8m  4473Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  10m  373Mi
ds-idrepo-0  1496m  13826Mi
ds-idrepo-1  13m  13631Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1450m  3465Mi
idm-65858d8c4c-8ff69  898m  3610Mi
lodemon-56989b88bb-nm2fw  4m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  131m  520Mi
18:07:19 DEBUG --- stderr ---
18:07:19 DEBUG
18:07:20 INFO
18:07:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:07:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:07:20 INFO [loop_until]: OK (rc = 0)
18:07:20 DEBUG --- stdout ---
18:07:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  84m  0%  1273Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  63m  0%  5406Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  62m  0%  5548Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  66m  0%  5500Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1009m  6%  4898Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  228m  1%  2139Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1435m  9%  4425Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  68m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  63m  0%  1067Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  58m  0%  1111Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1472m  9%  14348Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  63m  0%  14129Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  65m  0%  14183Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  202m  1%  2040Mi  3%
18:07:20 DEBUG --- stderr ---
18:07:20 DEBUG
18:08:19 INFO
18:08:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:08:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:08:19 INFO [loop_until]: OK (rc = 0)
18:08:19 DEBUG --- stdout ---
18:08:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  12m  4448Mi
am-55f77847b7-ngpns  18m  4460Mi
am-55f77847b7-q6zcv  8m  4473Mi
ds-cts-0  5m  410Mi
ds-cts-1  7m  381Mi
ds-cts-2  7m  373Mi
ds-idrepo-0  1480m  13804Mi
ds-idrepo-1  22m  13633Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1422m  3470Mi
idm-65858d8c4c-8ff69  1031m  3612Mi
lodemon-56989b88bb-nm2fw  4m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  130m  520Mi
18:08:19 DEBUG --- stderr ---
18:08:19 DEBUG
18:08:20 INFO
18:08:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:08:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:08:20 INFO [loop_until]: OK (rc = 0)
18:08:20 DEBUG --- stdout ---
18:08:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  78m  0%  1264Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  71m  0%  5408Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  64m  0%  5544Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  76m  0%  5509Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1134m  7%  4895Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  231m  1%  2136Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1444m  9%  4424Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  58m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  61m  0%  1065Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  60m  0%  1109Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1700m  10%  14368Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  65m  0%  14130Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  72m  0%  14186Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  203m  1%  2039Mi  3%
18:08:20 DEBUG --- stderr ---
18:08:20 DEBUG
18:09:19 INFO
18:09:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:09:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:09:19 INFO [loop_until]: OK (rc = 0)
18:09:19 DEBUG --- stdout ---
18:09:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  11m  4448Mi
am-55f77847b7-ngpns  8m  4451Mi
am-55f77847b7-q6zcv  9m  4473Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  9m  373Mi
ds-idrepo-0  1591m  13810Mi
ds-idrepo-1  11m  13634Mi
ds-idrepo-2  13m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1393m  3469Mi
idm-65858d8c4c-8ff69  1248m  3608Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  131m  520Mi
18:09:19 DEBUG --- stderr ---
18:09:19 DEBUG
18:09:20 INFO
18:09:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:09:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:09:20 INFO [loop_until]: OK (rc = 0)
18:09:20 DEBUG --- stdout ---
18:09:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1263Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  66m  0%  5410Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  65m  0%  5546Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5503Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1314m  8%  4892Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  238m  1%  2136Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1373m  8%  4422Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  60m  0%  1063Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  58m  0%  1066Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  57m  0%  1111Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1707m  10%  14359Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  70m  0%  14131Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  63m  0%  14187Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  212m  1%  2051Mi  3%
18:09:20 DEBUG --- stderr ---
18:09:20 DEBUG
18:10:19 INFO
18:10:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:10:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:10:19 INFO [loop_until]: OK (rc = 0)
18:10:19 DEBUG --- stdout ---
18:10:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4448Mi
am-55f77847b7-ngpns  8m  4451Mi
am-55f77847b7-q6zcv  7m  4473Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  381Mi
ds-cts-2  6m  373Mi
ds-idrepo-0  1758m  13803Mi
ds-idrepo-1  12m  13629Mi
ds-idrepo-2  10m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1556m  3474Mi
idm-65858d8c4c-8ff69  1137m  3611Mi
lodemon-56989b88bb-nm2fw  7m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  147m  520Mi
18:10:19 DEBUG --- stderr ---
18:10:19 DEBUG
18:10:20 INFO
18:10:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:10:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:10:20 INFO [loop_until]: OK (rc = 0)
18:10:20 DEBUG --- stdout ---
18:10:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  76m  0%  1262Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5410Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  62m  0%  5545Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5503Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1238m  7%  4894Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  229m  1%  2139Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1593m  10%  4430Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  60m  0%  1062Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  60m  0%  1066Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  63m  0%  1113Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1846m  11%  14347Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  63m  0%  14131Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  63m  0%  14181Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  212m  1%  2041Mi  3%
18:10:20 DEBUG --- stderr ---
18:10:20 DEBUG
18:11:19 INFO
18:11:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:11:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:11:19 INFO [loop_until]: OK (rc = 0)
18:11:19 DEBUG --- stdout ---
18:11:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4448Mi
am-55f77847b7-ngpns  8m  4451Mi
am-55f77847b7-q6zcv  20m  4473Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  11m  373Mi
ds-idrepo-0  1571m  13807Mi
ds-idrepo-1  11m  13629Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1393m  3475Mi
idm-65858d8c4c-8ff69  1578m  3612Mi
lodemon-56989b88bb-nm2fw  7m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  147m  521Mi
18:11:19 DEBUG --- stderr ---
18:11:19 DEBUG
18:11:20 INFO
18:11:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:11:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:11:20 INFO [loop_until]: OK (rc = 0)
18:11:20 DEBUG --- stdout ---
18:11:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1262Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  65m  0%  5407Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  75m  0%  5545Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5499Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1582m  9%  4895Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  239m  1%  2131Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1437m  9%  4431Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  67m  0%  1065Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  60m  0%  1067Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1110Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1796m  11%  14340Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  64m  0%  14129Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  63m  0%  14183Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  211m  1%  2040Mi  3%
18:11:20 DEBUG --- stderr ---
18:11:20 DEBUG
18:12:19 INFO
18:12:19 INFO [loop_until]: kubectl --namespace=xlou top pods
18:12:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:12:19 INFO [loop_until]: OK (rc = 0)
18:12:19 DEBUG --- stdout ---
18:12:19 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  14m  4449Mi
am-55f77847b7-ngpns  15m  4457Mi
am-55f77847b7-q6zcv  8m  4473Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  374Mi
ds-idrepo-0  1555m  13822Mi
ds-idrepo-1  11m  13629Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1334m  3476Mi
idm-65858d8c4c-8ff69  1132m  3618Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  131m  521Mi
18:12:19 DEBUG --- stderr ---
18:12:19 DEBUG
18:12:20 INFO
18:12:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:12:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:12:20 INFO [loop_until]: OK (rc = 0)
18:12:20 DEBUG --- stdout ---
18:12:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  76m  0%  1261Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  65m  0%  5410Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  65m  0%  5541Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  73m  0%  5503Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1147m  7%  4915Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  228m  1%  2135Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1419m  8%  4435Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1064Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  60m  0%  1068Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1108Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1619m  10%  14352Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  64m  0%  14128Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  62m  0%  14181Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  201m  1%  2036Mi  3%
18:12:20 DEBUG --- stderr ---
18:12:20 DEBUG
18:13:20 INFO
18:13:20 INFO [loop_until]: kubectl --namespace=xlou top pods
18:13:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:13:20 INFO [loop_until]: OK (rc = 0)
18:13:20 DEBUG --- stdout ---
18:13:20 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  14m  4449Mi
am-55f77847b7-ngpns  8m  4451Mi
am-55f77847b7-q6zcv  8m  4474Mi
ds-cts-0  6m  410Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  374Mi
ds-idrepo-0  1372m  13828Mi
ds-idrepo-1  11m  13629Mi
ds-idrepo-2  10m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1125m  3479Mi
idm-65858d8c4c-8ff69  891m  3620Mi
lodemon-56989b88bb-nm2fw  7m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  123m  521Mi
18:13:20 DEBUG --- stderr ---
18:13:20 DEBUG
18:13:20 INFO
18:13:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:13:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:13:20 INFO [loop_until]: OK (rc = 0)
18:13:20 DEBUG --- stdout ---
18:13:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  75m  0%  1264Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  67m  0%  5407Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  62m  0%  5545Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5501Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  985m  6%  4906Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  218m  1%  2141Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1234m  7%  4448Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1063Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  58m  0%  1067Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  60m  0%  1111Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1267m  7%  14366Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  64m  0%  14130Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  64m  0%  14183Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  197m  1%  2039Mi  3%
18:13:20 DEBUG --- stderr ---
18:13:20 DEBUG
18:14:20 INFO
18:14:20 INFO [loop_until]: kubectl --namespace=xlou top pods
18:14:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:14:20 INFO [loop_until]: OK (rc = 0)
18:14:20 DEBUG --- stdout ---
18:14:20 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4449Mi
am-55f77847b7-ngpns  9m  4451Mi
am-55f77847b7-q6zcv  8m  4473Mi
ds-cts-0  6m  410Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  373Mi
ds-idrepo-0  1501m  13811Mi
ds-idrepo-1  11m  13629Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1426m  3481Mi
idm-65858d8c4c-8ff69  1130m  3617Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  132m  521Mi
18:14:20 DEBUG --- stderr ---
18:14:20 DEBUG
18:14:20 INFO
18:14:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:14:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:14:20 INFO [loop_until]: OK (rc = 0)
18:14:20 DEBUG --- stdout ---
18:14:20 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  80m  0%  1258Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  61m  0%  5407Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  65m  0%  5549Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  68m  0%  5500Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1182m  7%  4898Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  232m  1%  2140Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1426m  8%  4438Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  63m  0%  1060Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  60m  0%  1069Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  60m  0%  1113Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1654m  10%  14355Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  65m  0%  14130Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  65m  0%  14180Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  207m  1%  2040Mi  3%
18:14:20 DEBUG --- stderr ---
18:14:20 DEBUG
18:15:20 INFO
18:15:20 INFO [loop_until]: kubectl --namespace=xlou top pods
18:15:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:15:20 INFO [loop_until]: OK (rc = 0)
18:15:20 DEBUG --- stdout ---
18:15:20 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  9m  4449Mi
am-55f77847b7-ngpns  16m  4451Mi
am-55f77847b7-q6zcv  11m  4474Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  373Mi
ds-idrepo-0  1638m  13800Mi
ds-idrepo-1  11m  13629Mi
ds-idrepo-2  13m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1509m  3503Mi
idm-65858d8c4c-8ff69  1181m  3618Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  139m  522Mi
18:15:20 DEBUG --- stderr ---
18:15:20 DEBUG
18:15:20 INFO
18:15:20 INFO [loop_until]: kubectl --namespace=xlou top node
18:15:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:15:21 INFO [loop_until]: OK (rc = 0)
18:15:21 DEBUG --- stdout ---
18:15:21 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  80m  0%  1262Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  65m  0%  5405Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  66m  0%  5544Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  68m  0%  5500Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1287m  8%  4902Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  238m  1%  2144Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1611m  10%  4459Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  62m  0%  1066Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  59m  0%  1066Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  57m  0%  1113Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1736m  10%  14352Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  67m  0%  14129Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  67m  0%  14195Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  217m  1%  2041Mi  3%
18:15:21 DEBUG --- stderr ---
18:15:21 DEBUG
18:16:20 INFO
18:16:20 INFO [loop_until]: kubectl --namespace=xlou top pods
18:16:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:16:20 INFO [loop_until]: OK (rc = 0)
18:16:20 DEBUG --- stdout ---
18:16:20 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4449Mi
am-55f77847b7-ngpns  8m  4451Mi
am-55f77847b7-q6zcv  7m  4474Mi
ds-cts-0  6m  410Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  374Mi
ds-idrepo-0  1559m  13806Mi
ds-idrepo-1  12m  13630Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1377m  3485Mi
idm-65858d8c4c-8ff69  1089m  3621Mi
lodemon-56989b88bb-nm2fw  2m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  130m  522Mi
18:16:20 DEBUG --- stderr ---
18:16:20 DEBUG
18:16:21 INFO
18:16:21 INFO [loop_until]: kubectl --namespace=xlou top node
18:16:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:16:21 INFO [loop_until]: OK (rc = 0)
18:16:21 DEBUG --- stdout ---
18:16:21 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1261Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  63m  0%  5406Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  63m  0%  5547Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5499Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  1148m  7%  4906Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  232m  1%  2140Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1389m  8%  4442Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  69m  0%  1073Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  65m  0%  1063Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  60m  0%  1113Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1651m  10%  14369Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  63m  0%  14130Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  62m  0%  14184Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  199m  1%  2040Mi  3%
18:16:21 DEBUG --- stderr ---
18:16:21 DEBUG
18:17:20 INFO
18:17:20 INFO [loop_until]: kubectl --namespace=xlou top pods
18:17:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:17:20 INFO [loop_until]: OK (rc = 0)
18:17:20 DEBUG --- stdout ---
18:17:20 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4449Mi
am-55f77847b7-ngpns  8m  4451Mi
am-55f77847b7-q6zcv  15m  4474Mi
ds-cts-0  6m  410Mi
ds-cts-1  7m  381Mi
ds-cts-2  7m  374Mi
ds-idrepo-0  1415m  13822Mi
ds-idrepo-1  11m  13630Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  1373m  3493Mi
idm-65858d8c4c-8ff69  868m  3622Mi
lodemon-56989b88bb-nm2fw  6m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  119m  522Mi
18:17:20 DEBUG --- stderr ---
18:17:20 DEBUG
18:17:21 INFO
18:17:21 INFO [loop_until]: kubectl --namespace=xlou top node
18:17:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:17:21 INFO [loop_until]: OK (rc = 0)
18:17:21 DEBUG --- stdout ---
18:17:21 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  81m  0%  1259Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5407Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  69m  0%  5546Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5502Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  932m  5%  4908Mi  8%
gke-xlou-cdm-default-pool-f05840a3-h81k  228m  1%  2135Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  1472m  9%  4441Mi  7%
gke-xlou-cdm-ds-32e4dcb1-1l6p  62m  0%  1065Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  63m  0%  1063Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  59m  0%  1112Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  1441m  9%  14371Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  65m  0%  14129Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  61m  0%  14185Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  193m  1%  2041Mi  3%
18:17:21 DEBUG --- stderr ---
18:17:21 DEBUG
18:18:20 INFO
18:18:20 INFO [loop_until]: kubectl --namespace=xlou top pods
18:18:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:18:20 INFO [loop_until]: OK (rc = 0)
18:18:20 DEBUG --- stdout ---
18:18:20 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4449Mi
am-55f77847b7-ngpns  8m  4451Mi
am-55f77847b7-q6zcv  7m  4473Mi
ds-cts-0  6m  409Mi
ds-cts-1  7m  380Mi
ds-cts-2  7m  373Mi
ds-idrepo-0  1616m  13806Mi
ds-idrepo-1  11m  13629Mi
ds-idrepo-2  21m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  4Mi
idm-65858d8c4c-5kwbg  1453m  3494Mi
idm-65858d8c4c-8ff69  1171m  3623Mi
lodemon-56989b88bb-nm2fw  8m  66Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  132m  522Mi
18:18:20 DEBUG --- stderr ---
18:18:20 DEBUG
18:18:21 INFO
18:18:21 INFO [loop_until]: kubectl --namespace=xlou top node
18:18:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:18:21 INFO [loop_until]: OK (rc = 0)
18:18:21 DEBUG ---
stdout --- 18:18:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5503Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1266m 7% 4908Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 234m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1563m 9% 4450Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1638m 10% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 76m 0% 14129Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14184Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 208m 1% 2041Mi 3% 18:18:21 DEBUG --- stderr --- 18:18:21 DEBUG 18:19:20 INFO 18:19:20 INFO [loop_until]: kubectl --namespace=xlou top pods 18:19:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:19:20 INFO [loop_until]: OK (rc = 0) 18:19:20 DEBUG --- stdout --- 18:19:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4449Mi am-55f77847b7-ngpns 9m 4451Mi am-55f77847b7-q6zcv 10m 4474Mi ds-cts-0 6m 409Mi ds-cts-1 7m 380Mi ds-cts-2 7m 375Mi ds-idrepo-0 1507m 13808Mi ds-idrepo-1 11m 13629Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1385m 3499Mi idm-65858d8c4c-8ff69 1212m 3625Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 131m 522Mi 18:19:20 DEBUG --- stderr --- 18:19:20 DEBUG 18:19:21 INFO 18:19:21 INFO [loop_until]: kubectl --namespace=xlou top node 18:19:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:19:21 INFO [loop_until]: OK (rc = 0) 18:19:21 DEBUG --- stdout --- 18:19:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1259Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5502Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1196m 7% 4912Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 227m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1477m 9% 4454Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1672m 10% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14129Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14186Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 199m 1% 2042Mi 3% 18:19:21 DEBUG --- stderr --- 18:19:21 DEBUG 18:20:20 INFO 18:20:20 INFO [loop_until]: kubectl --namespace=xlou top pods 18:20:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:20:20 INFO [loop_until]: OK (rc = 0) 18:20:20 DEBUG --- stdout --- 18:20:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4449Mi am-55f77847b7-ngpns 9m 4451Mi am-55f77847b7-q6zcv 7m 4474Mi ds-cts-0 8m 409Mi ds-cts-1 9m 381Mi ds-cts-2 7m 374Mi ds-idrepo-0 1633m 13822Mi ds-idrepo-1 11m 13629Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1474m 3500Mi idm-65858d8c4c-8ff69 995m 3628Mi lodemon-56989b88bb-nm2fw 5m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 129m 523Mi 18:20:20 DEBUG --- stderr --- 18:20:20 DEBUG 18:20:21 INFO 18:20:21 INFO [loop_until]: kubectl --namespace=xlou top node 18:20:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:20:21 INFO [loop_until]: OK (rc = 0) 18:20:21 DEBUG --- stdout --- 18:20:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5407Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 
5501Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1057m 6% 4915Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 227m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1570m 9% 4455Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1609m 10% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14130Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14182Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 200m 1% 2042Mi 3% 18:20:21 DEBUG --- stderr --- 18:20:21 DEBUG 18:21:20 INFO 18:21:20 INFO [loop_until]: kubectl --namespace=xlou top pods 18:21:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:21:20 INFO [loop_until]: OK (rc = 0) 18:21:20 DEBUG --- stdout --- 18:21:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4449Mi am-55f77847b7-ngpns 10m 4451Mi am-55f77847b7-q6zcv 9m 4474Mi ds-cts-0 6m 410Mi ds-cts-1 7m 380Mi ds-cts-2 8m 374Mi ds-idrepo-0 1431m 13802Mi ds-idrepo-1 11m 13629Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1332m 3501Mi idm-65858d8c4c-8ff69 978m 3628Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 130m 523Mi 18:21:20 DEBUG --- stderr --- 18:21:20 DEBUG 18:21:21 INFO 18:21:21 INFO [loop_until]: kubectl --namespace=xlou top node 18:21:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:21:21 INFO [loop_until]: OK (rc = 0) 18:21:21 DEBUG --- stdout --- 18:21:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5411Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5502Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1018m 6% 4915Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 231m 1% 2149Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 1515m 9% 4458Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1584m 9% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14131Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14184Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 187m 1% 2043Mi 3% 18:21:21 DEBUG --- stderr --- 18:21:21 DEBUG 18:22:20 INFO 18:22:20 INFO [loop_until]: kubectl --namespace=xlou top pods 18:22:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:22:20 INFO [loop_until]: OK (rc = 0) 18:22:20 DEBUG --- stdout --- 18:22:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4449Mi am-55f77847b7-ngpns 8m 4451Mi am-55f77847b7-q6zcv 8m 4474Mi ds-cts-0 6m 410Mi ds-cts-1 7m 381Mi ds-cts-2 7m 374Mi ds-idrepo-0 1144m 13807Mi ds-idrepo-1 11m 13629Mi ds-idrepo-2 10m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 883m 3503Mi idm-65858d8c4c-8ff69 712m 3629Mi lodemon-56989b88bb-nm2fw 6m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 116m 523Mi 18:22:20 DEBUG --- stderr --- 18:22:20 DEBUG 18:22:21 INFO 18:22:21 INFO [loop_until]: kubectl --namespace=xlou top node 18:22:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:22:21 INFO [loop_until]: OK (rc = 0) 18:22:21 DEBUG --- stdout --- 18:22:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5410Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 674m 4% 4910Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 194m 1% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 900m 5% 4458Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1069Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1073m 6% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14133Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14184Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 154m 0% 2043Mi 3% 18:22:21 DEBUG --- stderr --- 18:22:21 DEBUG 18:23:20 INFO 18:23:20 INFO [loop_until]: kubectl --namespace=xlou top pods 18:23:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:23:20 INFO [loop_until]: OK (rc = 0) 18:23:20 DEBUG --- stdout --- 18:23:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 10m 4449Mi am-55f77847b7-ngpns 9m 4451Mi am-55f77847b7-q6zcv 7m 4474Mi ds-cts-0 6m 410Mi ds-cts-1 7m 380Mi ds-cts-2 7m 374Mi ds-idrepo-0 12m 13807Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 9m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6m 3503Mi idm-65858d8c4c-8ff69 8m 3629Mi lodemon-56989b88bb-nm2fw 7m 66Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1m 102Mi 18:23:20 DEBUG --- stderr --- 18:23:20 DEBUG 18:23:21 INFO 18:23:21 INFO [loop_until]: kubectl --namespace=xlou top node 18:23:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:23:21 INFO [loop_until]: OK (rc = 0) 18:23:21 DEBUG --- stdout --- 18:23:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 5412Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5501Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4913Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 4458Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14132Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 
0% 14192Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1626Mi 2% 18:23:21 DEBUG --- stderr --- 18:23:21 DEBUG 127.0.0.1 - - [11/Aug/2023 18:23:54] "GET /monitoring/average?start_time=23-08-11_16:53:29&stop_time=23-08-11_17:21:53 HTTP/1.1" 200 - 18:24:20 INFO 18:24:20 INFO [loop_until]: kubectl --namespace=xlou top pods 18:24:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:24:21 INFO [loop_until]: OK (rc = 0) 18:24:21 DEBUG --- stdout --- 18:24:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4449Mi am-55f77847b7-ngpns 12m 4451Mi am-55f77847b7-q6zcv 7m 4474Mi ds-cts-0 6m 409Mi ds-cts-1 7m 381Mi ds-cts-2 7m 374Mi ds-idrepo-0 13m 13807Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 9m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 6m 3503Mi idm-65858d8c4c-8ff69 7m 3629Mi lodemon-56989b88bb-nm2fw 3m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 1m 102Mi 18:24:21 DEBUG --- stderr --- 18:24:21 DEBUG 18:24:21 INFO 18:24:21 INFO [loop_until]: kubectl --namespace=xlou top node 18:24:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:24:22 INFO [loop_until]: OK (rc = 0) 18:24:22 DEBUG --- stdout --- 18:24:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4913Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 4460Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14129Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14190Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1627Mi 2% 18:24:22 DEBUG --- stderr --- 18:24:22 DEBUG 18:25:21 INFO 18:25:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:25:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:25:21 INFO [loop_until]: OK (rc = 0) 18:25:21 DEBUG --- stdout --- 18:25:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4449Mi am-55f77847b7-ngpns 9m 4451Mi am-55f77847b7-q6zcv 7m 4477Mi ds-cts-0 17m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 374Mi ds-idrepo-0 1815m 13812Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 13m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 1564m 3499Mi idm-65858d8c4c-8ff69 1110m 3648Mi lodemon-56989b88bb-nm2fw 4m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 396m 502Mi 18:25:21 DEBUG --- stderr --- 18:25:21 DEBUG 18:25:22 INFO 18:25:22 INFO [loop_until]: kubectl --namespace=xlou top node 18:25:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:25:22 INFO [loop_until]: OK (rc = 0) 18:25:22 DEBUG --- stdout --- 18:25:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5408Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5501Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1342m 8% 4933Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 255m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1655m 10% 4457Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 66m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1761m 11% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 14131Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14189Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 453m 2% 2020Mi 3% 18:25:22 DEBUG --- stderr --- 18:25:22 DEBUG 18:26:21 INFO 18:26:21 INFO [loop_until]: kubectl 
--namespace=xlou top pods 18:26:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:26:21 INFO [loop_until]: OK (rc = 0) 18:26:21 DEBUG --- stdout --- 18:26:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4449Mi am-55f77847b7-ngpns 10m 4451Mi am-55f77847b7-q6zcv 8m 4477Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 8m 374Mi ds-idrepo-0 2216m 13799Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 1948m 3503Mi idm-65858d8c4c-8ff69 1611m 3635Mi lodemon-56989b88bb-nm2fw 6m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 250m 499Mi 18:26:21 DEBUG --- stderr --- 18:26:21 DEBUG 18:26:22 INFO 18:26:22 INFO [loop_until]: kubectl --namespace=xlou top node 18:26:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:26:22 INFO [loop_until]: OK (rc = 0) 18:26:22 DEBUG --- stdout --- 18:26:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5503Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1638m 10% 4921Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 254m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1887m 11% 4458Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2224m 13% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14131Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14191Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 308m 1% 2018Mi 3% 18:26:22 DEBUG --- stderr --- 18:26:22 DEBUG 18:27:21 INFO 18:27:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:27:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:27:21 INFO [loop_until]: OK (rc = 0) 18:27:21 DEBUG --- 
stdout --- 18:27:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4450Mi am-55f77847b7-ngpns 9m 4451Mi am-55f77847b7-q6zcv 7m 4477Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 374Mi ds-idrepo-0 2113m 13801Mi ds-idrepo-1 12m 13635Mi ds-idrepo-2 12m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1952m 3514Mi idm-65858d8c4c-8ff69 1508m 3637Mi lodemon-56989b88bb-nm2fw 6m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 323m 500Mi 18:27:21 DEBUG --- stderr --- 18:27:21 DEBUG 18:27:22 INFO 18:27:22 INFO [loop_until]: kubectl --namespace=xlou top node 18:27:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:27:22 INFO [loop_until]: OK (rc = 0) 18:27:22 DEBUG --- stdout --- 18:27:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5410Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1652m 10% 4921Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 272m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2137m 13% 4469Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2197m 13% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 71m 0% 14144Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14190Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 400m 2% 2025Mi 3% 18:27:22 DEBUG --- stderr --- 18:27:22 DEBUG 18:28:21 INFO 18:28:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:28:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:28:21 INFO [loop_until]: OK (rc = 0) 18:28:21 DEBUG --- stdout --- 18:28:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4453Mi am-55f77847b7-ngpns 11m 4452Mi 
am-55f77847b7-q6zcv 7m 4477Mi ds-cts-0 6m 411Mi ds-cts-1 7m 380Mi ds-cts-2 7m 374Mi ds-idrepo-0 2347m 13809Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1967m 3509Mi idm-65858d8c4c-8ff69 1655m 3644Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 258m 503Mi 18:28:21 DEBUG --- stderr --- 18:28:21 DEBUG 18:28:22 INFO 18:28:22 INFO [loop_until]: kubectl --namespace=xlou top node 18:28:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:28:22 INFO [loop_until]: OK (rc = 0) 18:28:22 DEBUG --- stdout --- 18:28:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5410Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5501Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1724m 10% 4931Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 262m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2037m 12% 4464Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2303m 14% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14132Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14191Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 317m 1% 2023Mi 3% 18:28:22 DEBUG --- stderr --- 18:28:22 DEBUG 18:29:21 INFO 18:29:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:29:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:29:21 INFO [loop_until]: OK (rc = 0) 18:29:21 DEBUG --- stdout --- 18:29:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 9m 4455Mi am-55f77847b7-q6zcv 8m 4477Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 8m 375Mi ds-idrepo-0 1994m 13802Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 11m 13569Mi 
end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1745m 3512Mi idm-65858d8c4c-8ff69 1406m 3647Mi lodemon-56989b88bb-nm2fw 8m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 195m 505Mi 18:29:21 DEBUG --- stderr --- 18:29:21 DEBUG 18:29:22 INFO 18:29:22 INFO [loop_until]: kubectl --namespace=xlou top node 18:29:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:29:22 INFO [loop_until]: OK (rc = 0) 18:29:22 DEBUG --- stdout --- 18:29:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5411Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1416m 8% 4933Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 258m 1% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1849m 11% 4469Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1990m 12% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14132Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14192Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 252m 1% 2022Mi 3% 18:29:22 DEBUG --- stderr --- 18:29:22 DEBUG 18:30:21 INFO 18:30:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:30:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:30:21 INFO [loop_until]: OK (rc = 0) 18:30:21 DEBUG --- stdout --- 18:30:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 10m 4454Mi am-55f77847b7-ngpns 8m 4455Mi am-55f77847b7-q6zcv 7m 4477Mi ds-cts-0 6m 411Mi ds-cts-1 6m 380Mi ds-cts-2 7m 374Mi ds-idrepo-0 2092m 13806Mi ds-idrepo-1 10m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2246m 3514Mi idm-65858d8c4c-8ff69 1535m 3651Mi lodemon-56989b88bb-nm2fw 6m 67Mi login-ui-74d6fb46c-2qx2r 
1m 3Mi overseer-0-5fcfb8f45c-v6ck5 178m 505Mi 18:30:21 DEBUG --- stderr --- 18:30:21 DEBUG 18:30:22 INFO 18:30:22 INFO [loop_until]: kubectl --namespace=xlou top node 18:30:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:30:22 INFO [loop_until]: OK (rc = 0) 18:30:22 DEBUG --- stdout --- 18:30:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5412Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1635m 10% 4937Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 268m 1% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2196m 13% 4467Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2186m 13% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 14133Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14191Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 249m 1% 2025Mi 3% 18:30:22 DEBUG --- stderr --- 18:30:22 DEBUG 18:31:21 INFO 18:31:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:31:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:31:21 INFO [loop_until]: OK (rc = 0) 18:31:21 DEBUG --- stdout --- 18:31:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 9m 4455Mi am-55f77847b7-q6zcv 8m 4477Mi ds-cts-0 6m 412Mi ds-cts-1 7m 381Mi ds-cts-2 7m 374Mi ds-idrepo-0 2208m 13804Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1860m 3517Mi idm-65858d8c4c-8ff69 1632m 3654Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 171m 505Mi 18:31:21 DEBUG --- stderr --- 18:31:21 DEBUG 18:31:22 INFO 18:31:22 INFO [loop_until]: kubectl --namespace=xlou top 
node 18:31:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:31:23 INFO [loop_until]: OK (rc = 0) 18:31:23 DEBUG --- stdout --- 18:31:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5414Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5507Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1770m 11% 4940Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 268m 1% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2079m 13% 4474Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2358m 14% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 14136Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14191Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 240m 1% 2026Mi 3% 18:31:23 DEBUG --- stderr --- 18:31:23 DEBUG 18:32:21 INFO 18:32:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:32:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:32:21 INFO [loop_until]: OK (rc = 0) 18:32:21 DEBUG --- stdout --- 18:32:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 10m 4455Mi am-55f77847b7-q6zcv 13m 4477Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 374Mi ds-idrepo-0 2143m 13813Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1905m 3520Mi idm-65858d8c4c-8ff69 1651m 3656Mi lodemon-56989b88bb-nm2fw 10m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 182m 506Mi 18:32:21 DEBUG --- stderr --- 18:32:21 DEBUG 18:32:23 INFO 18:32:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:32:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:32:23 INFO [loop_until]: OK (rc = 0) 18:32:23 DEBUG --- stdout --- 18:32:23 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1266Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5413Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5517Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1653m 10% 4941Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 271m 1% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2052m 12% 4478Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2223m 13% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14133Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14194Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 247m 1% 2022Mi 3% 18:32:23 DEBUG --- stderr --- 18:32:23 DEBUG 18:33:21 INFO 18:33:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:33:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:33:21 INFO [loop_until]: OK (rc = 0) 18:33:21 DEBUG --- stdout --- 18:33:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 9m 4455Mi am-55f77847b7-q6zcv 9m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 375Mi ds-idrepo-0 2093m 13804Mi ds-idrepo-1 10m 13634Mi ds-idrepo-2 10m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1902m 3527Mi idm-65858d8c4c-8ff69 1479m 3666Mi lodemon-56989b88bb-nm2fw 6m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 164m 506Mi 18:33:21 DEBUG --- stderr --- 18:33:21 DEBUG 18:33:23 INFO 18:33:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:33:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:33:23 INFO [loop_until]: OK (rc = 0) 18:33:23 DEBUG --- stdout --- 18:33:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5415Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1563m 9% 4952Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 262m 1% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1979m 12% 4481Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2061m 12% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 14136Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14192Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 239m 1% 2025Mi 3% 18:33:23 DEBUG --- stderr --- 18:33:23 DEBUG 18:34:21 INFO 18:34:21 INFO [loop_until]: kubectl --namespace=xlou top pods 18:34:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:34:21 INFO [loop_until]: OK (rc = 0) 18:34:21 DEBUG --- stdout --- 18:34:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4455Mi am-55f77847b7-q6zcv 7m 4477Mi ds-cts-0 5m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 378Mi ds-idrepo-0 2243m 13809Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2098m 3528Mi idm-65858d8c4c-8ff69 1486m 3654Mi lodemon-56989b88bb-nm2fw 8m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 168m 506Mi 18:34:21 DEBUG --- stderr --- 18:34:21 DEBUG 18:34:23 INFO 18:34:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:34:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:34:23 INFO [loop_until]: OK (rc = 0) 18:34:23 DEBUG --- stdout --- 18:34:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1706m 10% 
4940Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 266m 1% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2216m 13% 4487Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2427m 15% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14134Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14195Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 246m 1% 2023Mi 3% 18:34:23 DEBUG --- stderr --- 18:34:23 DEBUG 18:35:22 INFO 18:35:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:35:22 INFO [loop_until]: OK (rc = 0) 18:35:22 DEBUG --- stdout --- 18:35:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4455Mi am-55f77847b7-q6zcv 9m 4477Mi ds-cts-0 6m 412Mi ds-cts-1 8m 381Mi ds-cts-2 7m 378Mi ds-idrepo-0 2248m 13822Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 12m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2124m 3531Mi idm-65858d8c4c-8ff69 1477m 3656Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 169m 506Mi 18:35:22 DEBUG --- stderr --- 18:35:22 DEBUG 18:35:23 INFO 18:35:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:35:23 INFO [loop_until]: OK (rc = 0) 18:35:23 DEBUG --- stdout --- 18:35:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1257Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1551m 9% 4943Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 258m 1% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2182m 13% 4487Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 
0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2214m 13% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 14133Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14197Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 238m 1% 2026Mi 3% 18:35:23 DEBUG --- stderr --- 18:35:23 DEBUG 18:36:22 INFO 18:36:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:36:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:36:22 INFO [loop_until]: OK (rc = 0) 18:36:22 DEBUG --- stdout --- 18:36:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 19m 4452Mi am-55f77847b7-ngpns 8m 4455Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 377Mi ds-idrepo-0 2034m 13812Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1782m 3534Mi idm-65858d8c4c-8ff69 1684m 3658Mi lodemon-56989b88bb-nm2fw 10m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 176m 507Mi 18:36:22 DEBUG --- stderr --- 18:36:22 DEBUG 18:36:23 INFO 18:36:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:36:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:36:23 INFO [loop_until]: OK (rc = 0) 18:36:23 DEBUG --- stdout --- 18:36:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1782m 11% 4942Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 276m 1% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1810m 11% 4490Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2221m 13% 14373Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14133Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14196Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 247m 1% 2025Mi 3% 18:36:23 DEBUG --- stderr --- 18:36:23 DEBUG 18:37:22 INFO 18:37:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:37:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:37:22 INFO [loop_until]: OK (rc = 0) 18:37:22 DEBUG --- stdout --- 18:37:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4454Mi am-55f77847b7-ngpns 8m 4455Mi am-55f77847b7-q6zcv 7m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 377Mi ds-idrepo-0 2280m 13805Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 14m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2003m 3541Mi idm-65858d8c4c-8ff69 1731m 3677Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 192m 521Mi 18:37:22 DEBUG --- stderr --- 18:37:22 DEBUG 18:37:23 INFO 18:37:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:37:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:37:23 INFO [loop_until]: OK (rc = 0) 18:37:23 DEBUG --- stdout --- 18:37:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5509Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1783m 11% 4959Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 282m 1% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2184m 13% 4497Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2443m 15% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14133Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14196Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 263m 1% 2042Mi 3% 18:37:23 
DEBUG --- stderr --- 18:37:23 DEBUG 18:38:22 INFO 18:38:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:38:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:38:22 INFO [loop_until]: OK (rc = 0) 18:38:22 DEBUG --- stdout --- 18:38:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4454Mi am-55f77847b7-ngpns 9m 4455Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 5m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 378Mi ds-idrepo-0 2039m 13810Mi ds-idrepo-1 11m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1967m 3541Mi idm-65858d8c4c-8ff69 1362m 3662Mi lodemon-56989b88bb-nm2fw 10m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 169m 521Mi 18:38:22 DEBUG --- stderr --- 18:38:22 DEBUG 18:38:23 INFO 18:38:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:38:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:38:23 INFO [loop_until]: OK (rc = 0) 18:38:23 DEBUG --- stdout --- 18:38:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1272Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5507Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1474m 9% 4943Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 269m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2101m 13% 4500Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2115m 13% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 14132Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14193Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 229m 1% 2039Mi 3% 18:38:23 DEBUG --- stderr --- 18:38:23 DEBUG 18:39:22 INFO 18:39:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:39:22 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 18:39:22 INFO [loop_until]: OK (rc = 0) 18:39:22 DEBUG --- stdout --- 18:39:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4455Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 5m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 373Mi ds-idrepo-0 2117m 13804Mi ds-idrepo-1 12m 13634Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 1845m 3544Mi idm-65858d8c4c-8ff69 1476m 3664Mi lodemon-56989b88bb-nm2fw 8m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 173m 522Mi 18:39:22 DEBUG --- stderr --- 18:39:22 DEBUG 18:39:23 INFO 18:39:23 INFO [loop_until]: kubectl --namespace=xlou top node 18:39:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:39:23 INFO [loop_until]: OK (rc = 0) 18:39:23 DEBUG --- stdout --- 18:39:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1266Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5512Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1576m 9% 4946Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 267m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1938m 12% 4500Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2060m 12% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14135Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14194Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 241m 1% 2041Mi 3% 18:39:23 DEBUG --- stderr --- 18:39:23 DEBUG 18:40:22 INFO 18:40:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:40:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:40:22 INFO [loop_until]: OK (rc = 0) 18:40:22 DEBUG --- stdout --- 18:40:22 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 372Mi ds-idrepo-0 2158m 13805Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1979m 3547Mi idm-65858d8c4c-8ff69 1427m 3669Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 165m 522Mi 18:40:22 DEBUG --- stderr --- 18:40:22 DEBUG 18:40:24 INFO 18:40:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:40:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:40:24 INFO [loop_until]: OK (rc = 0) 18:40:24 DEBUG --- stdout --- 18:40:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 5507Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1519m 9% 4952Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 262m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2040m 12% 4501Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2087m 13% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14139Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14192Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 235m 1% 2039Mi 3% 18:40:24 DEBUG --- stderr --- 18:40:24 DEBUG 18:41:22 INFO 18:41:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:41:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:41:22 INFO [loop_until]: OK (rc = 0) 18:41:22 DEBUG --- stdout --- 18:41:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi 
ds-cts-2 7m 372Mi ds-idrepo-0 2205m 13828Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1928m 3550Mi idm-65858d8c4c-8ff69 1485m 3671Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 166m 522Mi 18:41:22 DEBUG --- stderr --- 18:41:22 DEBUG 18:41:24 INFO 18:41:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:41:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:41:24 INFO [loop_until]: OK (rc = 0) 18:41:24 DEBUG --- stdout --- 18:41:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5509Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1630m 10% 4951Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 267m 1% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2084m 13% 4505Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2288m 14% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 14139Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14192Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 237m 1% 2039Mi 3% 18:41:24 DEBUG --- stderr --- 18:41:24 DEBUG 18:42:22 INFO 18:42:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:42:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:42:22 INFO [loop_until]: OK (rc = 0) 18:42:22 DEBUG --- stdout --- 18:42:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 7m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 382Mi ds-cts-2 7m 372Mi ds-idrepo-0 2184m 13822Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 13m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1962m 3552Mi 
idm-65858d8c4c-8ff69 1473m 3671Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 172m 522Mi 18:42:22 DEBUG --- stderr --- 18:42:22 DEBUG 18:42:24 INFO 18:42:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:42:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:42:24 INFO [loop_until]: OK (rc = 0) 18:42:24 DEBUG --- stdout --- 18:42:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1266Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5505Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1492m 9% 4954Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 277m 1% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2142m 13% 4505Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2320m 14% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 14135Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14195Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 241m 1% 2043Mi 3% 18:42:24 DEBUG --- stderr --- 18:42:24 DEBUG 18:43:22 INFO 18:43:22 INFO [loop_until]: kubectl --namespace=xlou top pods 18:43:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:43:22 INFO [loop_until]: OK (rc = 0) 18:43:22 DEBUG --- stdout --- 18:43:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 7m 4456Mi am-55f77847b7-q6zcv 7m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 5m 381Mi ds-cts-2 5m 372Mi ds-idrepo-0 2248m 13826Mi ds-idrepo-1 12m 13634Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1973m 3554Mi idm-65858d8c4c-8ff69 1608m 3679Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 171m 523Mi 18:43:22 DEBUG --- stderr 
--- 18:43:22 DEBUG 18:43:24 INFO 18:43:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:43:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:43:24 INFO [loop_until]: OK (rc = 0) 18:43:24 DEBUG --- stdout --- 18:43:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5413Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5510Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1660m 10% 4961Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 270m 1% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2236m 14% 4508Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2329m 14% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14134Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14194Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 252m 1% 2055Mi 3% 18:43:24 DEBUG --- stderr --- 18:43:24 DEBUG 18:44:23 INFO 18:44:23 INFO [loop_until]: kubectl --namespace=xlou top pods 18:44:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:44:23 INFO [loop_until]: OK (rc = 0) 18:44:23 DEBUG --- stdout --- 18:44:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 9m 4478Mi ds-cts-0 6m 412Mi ds-cts-1 5m 381Mi ds-cts-2 7m 383Mi ds-idrepo-0 2299m 13813Mi ds-idrepo-1 12m 13635Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2130m 3560Mi idm-65858d8c4c-8ff69 1490m 3680Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 175m 523Mi 18:44:23 DEBUG --- stderr --- 18:44:23 DEBUG 18:44:24 INFO 18:44:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:44:24 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 18:44:24 INFO [loop_until]: OK (rc = 0) 18:44:24 DEBUG --- stdout --- 18:44:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5510Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1566m 9% 4965Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 270m 1% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2223m 13% 4514Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2186m 13% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14138Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14195Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 243m 1% 2040Mi 3% 18:44:24 DEBUG --- stderr --- 18:44:24 DEBUG 18:45:23 INFO 18:45:23 INFO [loop_until]: kubectl --namespace=xlou top pods 18:45:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:45:23 INFO [loop_until]: OK (rc = 0) 18:45:23 DEBUG --- stdout --- 18:45:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 6m 381Mi ds-cts-2 7m 372Mi ds-idrepo-0 2279m 13800Mi ds-idrepo-1 15m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 2105m 3555Mi idm-65858d8c4c-8ff69 1446m 3682Mi lodemon-56989b88bb-nm2fw 5m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 174m 523Mi 18:45:23 DEBUG --- stderr --- 18:45:23 DEBUG 18:45:24 INFO 18:45:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:45:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:45:24 INFO [loop_until]: OK (rc = 0) 18:45:24 DEBUG --- stdout --- 18:45:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5414Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1572m 9% 4965Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 267m 1% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2239m 14% 4509Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2292m 14% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14140Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14195Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 242m 1% 2045Mi 3% 18:45:24 DEBUG --- stderr --- 18:45:24 DEBUG 18:46:23 INFO 18:46:23 INFO [loop_until]: kubectl --namespace=xlou top pods 18:46:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:46:23 INFO [loop_until]: OK (rc = 0) 18:46:23 DEBUG --- stdout --- 18:46:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4454Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 6m 381Mi ds-cts-2 6m 373Mi ds-idrepo-0 2036m 13808Mi ds-idrepo-1 12m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1714m 3563Mi idm-65858d8c4c-8ff69 1434m 3684Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 162m 523Mi 18:46:23 DEBUG --- stderr --- 18:46:23 DEBUG 18:46:24 INFO 18:46:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:46:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:46:24 INFO [loop_until]: OK (rc = 0) 18:46:24 DEBUG --- stdout --- 18:46:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1259Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 
5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1555m 9% 4967Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 256m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1860m 11% 4517Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2057m 12% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 14137Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14196Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 232m 1% 2046Mi 3% 18:46:24 DEBUG --- stderr --- 18:46:24 DEBUG 18:47:23 INFO 18:47:23 INFO [loop_until]: kubectl --namespace=xlou top pods 18:47:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:47:23 INFO [loop_until]: OK (rc = 0) 18:47:23 DEBUG --- stdout --- 18:47:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4455Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 7m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 8m 373Mi ds-idrepo-0 2261m 13808Mi ds-idrepo-1 22m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1936m 3561Mi idm-65858d8c4c-8ff69 1401m 3686Mi lodemon-56989b88bb-nm2fw 6m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 175m 523Mi 18:47:23 DEBUG --- stderr --- 18:47:23 DEBUG 18:47:24 INFO 18:47:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:47:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:47:24 INFO [loop_until]: OK (rc = 0) 18:47:24 DEBUG --- stdout --- 18:47:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1595m 10% 4971Mi 8% 
gke-xlou-cdm-default-pool-f05840a3-h81k 268m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2091m 13% 4527Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2277m 14% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14140Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 14200Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 240m 1% 2046Mi 3% 18:47:24 DEBUG --- stderr --- 18:47:24 DEBUG 18:48:23 INFO 18:48:23 INFO [loop_until]: kubectl --namespace=xlou top pods 18:48:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:48:23 INFO [loop_until]: OK (rc = 0) 18:48:23 DEBUG --- stdout --- 18:48:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4455Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 10m 375Mi ds-idrepo-0 2043m 13823Mi ds-idrepo-1 13m 13635Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1892m 3563Mi idm-65858d8c4c-8ff69 1390m 3688Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 161m 524Mi 18:48:23 DEBUG --- stderr --- 18:48:23 DEBUG 18:48:24 INFO 18:48:24 INFO [loop_until]: kubectl --namespace=xlou top node 18:48:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:48:25 INFO [loop_until]: OK (rc = 0) 18:48:25 DEBUG --- stdout --- 18:48:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5417Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1451m 9% 4972Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 255m 1% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1908m 12% 4516Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1067Mi 
1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2059m 12% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 14138Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 14198Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 227m 1% 2047Mi 3% 18:48:25 DEBUG --- stderr --- 18:48:25 DEBUG 18:49:23 INFO 18:49:23 INFO [loop_until]: kubectl --namespace=xlou top pods 18:49:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:49:23 INFO [loop_until]: OK (rc = 0) 18:49:23 DEBUG --- stdout --- 18:49:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4457Mi am-55f77847b7-ngpns 8m 4456Mi am-55f77847b7-q6zcv 8m 4478Mi ds-cts-0 6m 411Mi ds-cts-1 7m 381Mi ds-cts-2 7m 376Mi ds-idrepo-0 2312m 13822Mi ds-idrepo-1 22m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2044m 3565Mi idm-65858d8c4c-8ff69 1555m 3690Mi lodemon-56989b88bb-nm2fw 5m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 175m 524Mi 18:49:23 DEBUG --- stderr --- 18:49:23 DEBUG 18:49:25 INFO 18:49:25 INFO [loop_until]: kubectl --namespace=xlou top node 18:49:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:49:25 INFO [loop_until]: OK (rc = 0) 18:49:25 DEBUG --- stdout --- 18:49:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1257Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5417Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5510Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1741m 10% 4973Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 269m 1% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1998m 12% 4521Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2388m 15% 14383Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14137Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 77m 0% 14202Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 252m 1% 2048Mi 3% 18:49:25 DEBUG --- stderr --- 18:49:25 DEBUG 18:50:23 INFO 18:50:23 INFO [loop_until]: kubectl --namespace=xlou top pods 18:50:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:50:23 INFO [loop_until]: OK (rc = 0) 18:50:23 DEBUG --- stdout --- 18:50:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4457Mi am-55f77847b7-ngpns 8m 4457Mi am-55f77847b7-q6zcv 7m 4478Mi ds-cts-0 6m 412Mi ds-cts-1 6m 381Mi ds-cts-2 8m 375Mi ds-idrepo-0 2047m 13807Mi ds-idrepo-1 14m 13636Mi ds-idrepo-2 15m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 1973m 3586Mi idm-65858d8c4c-8ff69 1263m 3694Mi lodemon-56989b88bb-nm2fw 7m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 163m 524Mi 18:50:23 DEBUG --- stderr --- 18:50:23 DEBUG 18:50:25 INFO 18:50:25 INFO [loop_until]: kubectl --namespace=xlou top node 18:50:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:50:25 INFO [loop_until]: OK (rc = 0) 18:50:25 DEBUG --- stdout --- 18:50:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5510Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1340m 8% 4974Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 263m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2045m 12% 4542Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2120m 13% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 71m 0% 14139Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 14202Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 233m 1% 2043Mi 3% 18:50:25 
DEBUG --- stderr ---
18:50:25 DEBUG
18:51:23 INFO
18:51:23 INFO [loop_until]: kubectl --namespace=xlou top pods
18:51:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:51:23 INFO [loop_until]: OK (rc = 0)
18:51:23 DEBUG --- stdout ---
18:51:23 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 9m 4478Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 8m 375Mi
ds-idrepo-0 2138m 13826Mi
ds-idrepo-1 16m 13636Mi
ds-idrepo-2 13m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 1818m 3571Mi
idm-65858d8c4c-8ff69 1588m 3689Mi
lodemon-56989b88bb-nm2fw 6m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 168m 524Mi
18:51:23 DEBUG --- stderr ---
18:51:23 DEBUG
18:51:25 INFO
18:51:25 INFO [loop_until]: kubectl --namespace=xlou top node
18:51:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:51:25 INFO [loop_until]: OK (rc = 0)
18:51:25 DEBUG --- stdout ---
18:51:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1257Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5416Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5549Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5509Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1554m 9% 4966Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 256m 1% 2141Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1983m 12% 4530Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1071Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2207m 13% 14369Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 74m 0% 14140Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14203Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 236m 1% 2045Mi 3%
18:51:25 DEBUG --- stderr ---
18:51:25 DEBUG
18:52:23 INFO
18:52:23 INFO [loop_until]: kubectl --namespace=xlou top pods
18:52:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:52:23 INFO [loop_until]: OK (rc = 0)
18:52:23 DEBUG --- stdout ---
18:52:23 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 7m 4457Mi
am-55f77847b7-q6zcv 8m 4478Mi
ds-cts-0 6m 413Mi
ds-cts-1 7m 382Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 2216m 13823Mi
ds-idrepo-1 14m 13635Mi
ds-idrepo-2 18m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2120m 3575Mi
idm-65858d8c4c-8ff69 1499m 3691Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 177m 525Mi
18:52:23 DEBUG --- stderr ---
18:52:23 DEBUG
18:52:25 INFO
18:52:25 INFO [loop_until]: kubectl --namespace=xlou top node
18:52:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:52:25 INFO [loop_until]: OK (rc = 0)
18:52:25 DEBUG --- stdout ---
18:52:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1260Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5419Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5552Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5508Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1572m 9% 4973Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 272m 1% 2132Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2266m 14% 4532Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2295m 14% 14360Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 71m 0% 14141Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14199Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 235m 1% 2044Mi 3%
18:52:25 DEBUG --- stderr ---
18:52:25 DEBUG
18:53:23 INFO
18:53:23 INFO [loop_until]: kubectl --namespace=xlou top pods
18:53:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:53:24 INFO [loop_until]: OK (rc = 0)
18:53:24 DEBUG --- stdout ---
18:53:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 8m 4478Mi
ds-cts-0 6m 413Mi
ds-cts-1 7m 382Mi
ds-cts-2 7m 376Mi
ds-idrepo-0 2170m 13823Mi
ds-idrepo-1 18m 13635Mi
ds-idrepo-2 15m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 1948m 3587Mi
idm-65858d8c4c-8ff69 1307m 3692Mi
lodemon-56989b88bb-nm2fw 8m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 165m 525Mi
18:53:24 DEBUG --- stderr ---
18:53:24 DEBUG
18:53:25 INFO
18:53:25 INFO [loop_until]: kubectl --namespace=xlou top node
18:53:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:53:25 INFO [loop_until]: OK (rc = 0)
18:53:25 DEBUG --- stdout ---
18:53:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1259Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5419Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5562Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5508Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1386m 8% 4976Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 258m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2133m 13% 4539Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2203m 13% 14364Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 69m 0% 14140Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 14201Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 238m 1% 2044Mi 3%
18:53:25 DEBUG --- stderr ---
18:53:25 DEBUG
18:54:24 INFO
18:54:24 INFO [loop_until]: kubectl --namespace=xlou top pods
18:54:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:54:24 INFO [loop_until]: OK (rc = 0)
18:54:24 DEBUG --- stdout ---
18:54:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 11m 4478Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 1953m 13822Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 10m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 1828m 3577Mi
idm-65858d8c4c-8ff69 1472m 3699Mi
lodemon-56989b88bb-nm2fw 6m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 175m 525Mi
18:54:24 DEBUG --- stderr ---
18:54:24 DEBUG
18:54:25 INFO
18:54:25 INFO [loop_until]: kubectl --namespace=xlou top node
18:54:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:54:25 INFO [loop_until]: OK (rc = 0)
18:54:25 DEBUG --- stdout ---
18:54:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5554Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5507Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1625m 10% 4978Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 262m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1903m 11% 4531Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1077Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2098m 13% 14359Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14143Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14207Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 239m 1% 2044Mi 3%
18:54:25 DEBUG --- stderr ---
18:54:25 DEBUG
18:55:24 INFO
18:55:24 INFO [loop_until]: kubectl --namespace=xlou top pods
18:55:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:55:24 INFO [loop_until]: OK (rc = 0)
18:55:24 DEBUG --- stdout ---
18:55:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 7m 4478Mi
ds-cts-0 5m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 14m 13822Mi
ds-idrepo-1 12m 13636Mi
ds-idrepo-2 16m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 6m 3577Mi
idm-65858d8c4c-8ff69 7m 3698Mi
lodemon-56989b88bb-nm2fw 6m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 26m 103Mi
18:55:24 DEBUG --- stderr ---
18:55:24 DEBUG
18:55:25 INFO
18:55:25 INFO [loop_until]: kubectl --namespace=xlou top node
18:55:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:55:25 INFO [loop_until]: OK (rc = 0)
18:55:25 DEBUG --- stdout ---
18:55:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5551Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5508Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4983Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2141Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4530Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14383Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14138Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14200Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1632Mi 2%
18:55:25 DEBUG --- stderr ---
18:55:25 DEBUG
127.0.0.1 - - [11/Aug/2023 18:56:13] "GET /monitoring/average?start_time=23-08-11_17:25:54&stop_time=23-08-11_17:54:13 HTTP/1.1" 200 -
18:56:24 INFO
18:56:24 INFO [loop_until]: kubectl --namespace=xlou top pods
18:56:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:56:24 INFO [loop_until]: OK (rc = 0)
18:56:24 DEBUG --- stdout ---
18:56:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 9m 4478Mi
ds-cts-0 6m 413Mi
ds-cts-1 7m 382Mi
ds-cts-2 5m 375Mi
ds-idrepo-0 10m 13822Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 16m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 6m 3577Mi
idm-65858d8c4c-8ff69 6m 3698Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 1m 103Mi
18:56:24 DEBUG --- stderr ---
18:56:24 DEBUG
18:56:25 INFO
18:56:25 INFO [loop_until]: kubectl --namespace=xlou top node
18:56:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:56:25 INFO [loop_until]: OK (rc = 0)
18:56:25 DEBUG --- stdout ---
18:56:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5553Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4981Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 65m 0% 4531Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14381Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14141Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 14200Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 584m 3% 1908Mi 3%
18:56:25 DEBUG --- stderr ---
18:56:25 DEBUG
18:57:24 INFO
18:57:24 INFO [loop_until]: kubectl --namespace=xlou top pods
18:57:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:57:24 INFO [loop_until]: OK (rc = 0)
18:57:24 DEBUG --- stdout ---
18:57:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 11m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 8m 4478Mi
ds-cts-0 8m 412Mi
ds-cts-1 9m 382Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 1176m 13823Mi
ds-idrepo-1 13m 13636Mi
ds-idrepo-2 10m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 484m 3580Mi
idm-65858d8c4c-8ff69 343m 3704Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 793m 484Mi
18:57:24 DEBUG --- stderr ---
18:57:24 DEBUG
18:57:26 INFO
18:57:26 INFO [loop_until]: kubectl --namespace=xlou top node
18:57:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:57:26 INFO [loop_until]: OK (rc = 0)
18:57:26 DEBUG --- stdout ---
18:57:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5416Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5550Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5506Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1840m 11% 4980Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 282m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2592m 16% 4534Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2842m 17% 14376Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14139Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14200Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 436m 2% 2009Mi 3%
18:57:26 DEBUG --- stderr ---
18:57:26 DEBUG
18:58:24 INFO
18:58:24 INFO [loop_until]: kubectl --namespace=xlou top pods
18:58:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:58:24 INFO [loop_until]: OK (rc = 0)
18:58:24 DEBUG --- stdout ---
18:58:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 8m 4479Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2928m 13822Mi
ds-idrepo-1 12m 13636Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2443m 3584Mi
idm-65858d8c4c-8ff69 1954m 3707Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 305m 489Mi
18:58:24 DEBUG --- stderr ---
18:58:24 DEBUG
18:58:26 INFO
18:58:26 INFO [loop_until]: kubectl --namespace=xlou top node
18:58:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:58:26 INFO [loop_until]: OK (rc = 0)
18:58:26 DEBUG --- stdout ---
18:58:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5416Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5555Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5509Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2051m 12% 4987Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 304m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2541m 15% 4540Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1115Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3028m 19% 14385Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14139Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14200Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 326m 2% 2022Mi 3%
18:58:26 DEBUG --- stderr ---
18:58:26 DEBUG
18:59:24 INFO
18:59:24 INFO [loop_until]: kubectl --namespace=xlou top pods
18:59:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:59:24 INFO [loop_until]: OK (rc = 0)
18:59:24 DEBUG --- stdout ---
18:59:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 8m 4486Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 383Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2954m 13810Mi
ds-idrepo-1 13m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2564m 3587Mi
idm-65858d8c4c-8ff69 2093m 3723Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 297m 504Mi
18:59:24 DEBUG --- stderr ---
18:59:24 DEBUG
18:59:26 INFO
18:59:26 INFO [loop_until]: kubectl --namespace=xlou top node
18:59:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:59:26 INFO [loop_until]: OK (rc = 0)
18:59:26 DEBUG --- stdout ---
18:59:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5416Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5563Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2165m 13% 4991Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 303m 1% 2142Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2652m 16% 4539Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1071Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3136m 19% 14382Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14140Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14201Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 326m 2% 2031Mi 3%
18:59:26 DEBUG --- stderr ---
18:59:26 DEBUG
19:00:24 INFO
19:00:24 INFO [loop_until]: kubectl --namespace=xlou top pods
19:00:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:00:24 INFO [loop_until]: OK (rc = 0)
19:00:24 DEBUG --- stdout ---
19:00:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 8m 4496Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2915m 13823Mi
ds-idrepo-1 15m 13636Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2279m 3590Mi
idm-65858d8c4c-8ff69 1883m 3707Mi
lodemon-56989b88bb-nm2fw 6m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 221m 510Mi
19:00:24 DEBUG --- stderr ---
19:00:24 DEBUG
19:00:26 INFO
19:00:26 INFO [loop_until]: kubectl --namespace=xlou top node
19:00:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:00:26 INFO [loop_until]: OK (rc = 0)
19:00:26 DEBUG --- stdout ---
19:00:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1262Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5415Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5572Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5509Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2080m 13% 4990Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 309m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2734m 17% 4546Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2751m 17% 14375Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14143Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14201Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 284m 1% 2032Mi 3%
19:00:26 DEBUG --- stderr ---
19:00:26 DEBUG
19:01:24 INFO
19:01:24 INFO [loop_until]: kubectl --namespace=xlou top pods
19:01:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:01:24 INFO [loop_until]: OK (rc = 0)
19:01:24 DEBUG --- stdout ---
19:01:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 8m 4457Mi
am-55f77847b7-q6zcv 9m 4505Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2776m 13823Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2417m 3595Mi
idm-65858d8c4c-8ff69 1864m 3710Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 204m 510Mi
19:01:24 DEBUG --- stderr ---
19:01:24 DEBUG
19:01:26 INFO
19:01:26 INFO [loop_until]: kubectl --namespace=xlou top node
19:01:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:01:26 INFO [loop_until]: OK (rc = 0)
19:01:26 DEBUG --- stdout ---
19:01:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5419Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5580Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 5508Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1753m 11% 4991Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 302m 1% 2126Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2800m 17% 4551Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2997m 18% 14368Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14141Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14204Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 278m 1% 2030Mi 3%
19:01:26 DEBUG --- stderr ---
19:01:26 DEBUG
19:02:24 INFO
19:02:24 INFO [loop_until]: kubectl --namespace=xlou top pods
19:02:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:02:24 INFO [loop_until]: OK (rc = 0)
19:02:24 DEBUG --- stdout ---
19:02:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 9m 4457Mi
am-55f77847b7-q6zcv 8m 4517Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2797m 13812Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 16m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 2480m 3597Mi
idm-65858d8c4c-8ff69 2074m 3717Mi
lodemon-56989b88bb-nm2fw 6m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 204m 511Mi
19:02:24 DEBUG --- stderr ---
19:02:24 DEBUG
19:02:26 INFO
19:02:26 INFO [loop_until]: kubectl --namespace=xlou top node
19:02:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:02:26 INFO [loop_until]: OK (rc = 0)
19:02:26 DEBUG --- stdout ---
19:02:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5589Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5509Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1812m 11% 5001Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 292m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2557m 16% 4549Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2730m 17% 14367Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14143Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14200Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 271m 1% 2032Mi 3%
19:02:26 DEBUG --- stderr ---
19:02:26 DEBUG
19:03:24 INFO
19:03:24 INFO [loop_until]: kubectl --namespace=xlou top pods
19:03:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:03:24 INFO [loop_until]: OK (rc = 0)
19:03:24 DEBUG --- stdout ---
19:03:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 7m 4458Mi
am-55f77847b7-q6zcv 20m 4639Mi
ds-cts-0 12m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 5m 375Mi
ds-idrepo-0 2886m 13823Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 2419m 3600Mi
idm-65858d8c4c-8ff69 2025m 3719Mi
lodemon-56989b88bb-nm2fw 8m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 212m 511Mi
19:03:24 DEBUG --- stderr ---
19:03:24 DEBUG
19:03:26 INFO
19:03:26 INFO [loop_until]: kubectl --namespace=xlou top node
19:03:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:03:26 INFO [loop_until]: OK (rc = 0)
19:03:26 DEBUG --- stdout ---
19:03:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1266Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5710Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5512Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2192m 13% 5012Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 302m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2362m 14% 4551Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 70m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1115Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2912m 18% 14367Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14139Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14200Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 278m 1% 2032Mi 3%
19:03:26 DEBUG --- stderr ---
19:03:26 DEBUG
19:04:25 INFO
19:04:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:04:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:04:25 INFO [loop_until]: OK (rc = 0)
19:04:25 DEBUG --- stdout ---
19:04:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 8m 4458Mi
am-55f77847b7-q6zcv 4m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 12m 372Mi
ds-idrepo-0 2711m 13802Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2444m 3602Mi
idm-65858d8c4c-8ff69 1992m 3717Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 209m 511Mi
19:04:25 DEBUG --- stderr ---
19:04:25 DEBUG
19:04:26 INFO
19:04:26 INFO [loop_until]: kubectl --namespace=xlou top node
19:04:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:04:26 INFO [loop_until]: OK (rc = 0)
19:04:26 DEBUG --- stdout ---
19:04:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5421Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2046m 12% 4999Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 300m 1% 2135Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2553m 16% 4558Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2899m 18% 14368Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14144Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14201Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 288m 1% 2034Mi 3%
19:04:26 DEBUG --- stderr ---
19:04:26 DEBUG
19:05:25 INFO
19:05:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:05:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:05:25 INFO [loop_until]: OK (rc = 0)
19:05:25 DEBUG --- stdout ---
19:05:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 9m 4458Mi
am-55f77847b7-q6zcv 10m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 384Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 2864m 13802Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2667m 3605Mi
idm-65858d8c4c-8ff69 1881m 3718Mi
lodemon-56989b88bb-nm2fw 8m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 233m 512Mi
19:05:25 DEBUG --- stderr ---
19:05:25 DEBUG
19:05:26 INFO
19:05:26 INFO [loop_until]: kubectl --namespace=xlou top node
19:05:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:05:26 INFO [loop_until]: OK (rc = 0)
19:05:26 DEBUG --- stdout ---
19:05:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1262Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5419Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5709Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2000m 12% 5003Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 304m 1% 2134Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2644m 16% 4562Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 51m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2764m 17% 14366Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14142Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14202Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2034Mi 3%
19:05:26 DEBUG --- stderr ---
19:05:26 DEBUG
19:06:25 INFO
19:06:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:06:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:06:25 INFO [loop_until]: OK (rc = 0)
19:06:25 DEBUG --- stdout ---
19:06:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 9m 4458Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 2639m 13800Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2599m 3627Mi
idm-65858d8c4c-8ff69 1899m 3722Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 251m 545Mi
19:06:25 DEBUG --- stderr ---
19:06:25 DEBUG
19:06:27 INFO
19:06:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:06:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:06:27 INFO [loop_until]: OK (rc = 0)
19:06:27 DEBUG --- stdout ---
19:06:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5422Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5710Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5509Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2153m 13% 5003Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 298m 1% 2152Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2660m 16% 4565Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2831m 17% 14368Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14140Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14204Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 311m 1% 2065Mi 3%
19:06:27 DEBUG --- stderr ---
19:06:27 DEBUG
19:07:25 INFO
19:07:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:07:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:07:25 INFO [loop_until]: OK (rc = 0)
19:07:25 DEBUG --- stdout ---
19:07:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 10m 4457Mi
am-55f77847b7-ngpns 10m 4458Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2847m 13805Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 2654m 3613Mi
idm-65858d8c4c-8ff69 1924m 3725Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 243m 546Mi
19:07:25 DEBUG --- stderr ---
19:07:25 DEBUG
19:07:27 INFO
19:07:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:07:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:07:27 INFO [loop_until]: OK (rc = 0)
19:07:27 DEBUG --- stdout ---
19:07:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5418Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5708Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2148m 13% 5006Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 307m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2774m 17% 4567Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1116Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3011m 18% 14363Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 14152Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14208Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2067Mi 3%
19:07:27 DEBUG --- stderr ---
19:07:27 DEBUG
19:08:25 INFO
19:08:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:08:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:08:25 INFO [loop_until]: OK (rc = 0)
19:08:25 DEBUG --- stdout ---
19:08:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4458Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2507m 13805Mi
ds-idrepo-1 10m 13635Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2342m 3614Mi
idm-65858d8c4c-8ff69 1930m 3727Mi
lodemon-56989b88bb-nm2fw 5m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 198m 546Mi
19:08:25 DEBUG --- stderr ---
19:08:25 DEBUG
19:08:27 INFO
19:08:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:08:27 INFO [loop_until]: OK (rc = 0)
19:08:27 DEBUG --- stdout ---
19:08:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1262Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5419Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5707Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2156m 13% 5008Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 295m 1% 2141Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2561m 16% 4571Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1116Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2820m 17% 14372Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14143Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14203Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 259m 1% 2068Mi 3%
19:08:27 DEBUG --- stderr ---
19:08:27 DEBUG
19:09:25 INFO
19:09:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:09:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:09:25 INFO [loop_until]: OK (rc = 0)
19:09:25 DEBUG --- stdout ---
19:09:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 9m 4458Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2831m 13803Mi
ds-idrepo-1 8m 13634Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2455m 3618Mi
idm-65858d8c4c-8ff69 1852m 3730Mi
lodemon-56989b88bb-nm2fw 5m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 205m 546Mi
19:09:25 DEBUG --- stderr ---
19:09:25 DEBUG
19:09:27 INFO
19:09:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:09:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:09:27 INFO [loop_until]: OK (rc = 0)
19:09:27 DEBUG --- stdout ---
19:09:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1269Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5415Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5708Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 5519Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2125m 13% 5011Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 306m 1% 2143Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2524m 15% 4573Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1063Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2805m 17% 14380Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14144Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14202Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 279m 1% 2070Mi 3%
19:09:27 DEBUG --- stderr ---
19:09:27 DEBUG
19:10:25 INFO
19:10:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:10:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:10:25 INFO [loop_until]: OK (rc = 0)
19:10:25 DEBUG --- stdout ---
19:10:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 10m 4457Mi
am-55f77847b7-ngpns 11m 4458Mi
am-55f77847b7-q6zcv 10m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 5m 374Mi
ds-idrepo-0 2707m 13796Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2652m 3622Mi
idm-65858d8c4c-8ff69 2004m 3733Mi
lodemon-56989b88bb-nm2fw 7m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 209m 546Mi
19:10:25 DEBUG --- stderr ---
19:10:25 DEBUG
19:10:27 INFO
19:10:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:10:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:10:27 INFO [loop_until]: OK (rc = 0)
19:10:27 DEBUG --- stdout ---
19:10:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5418Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5705Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5509Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2085m 13% 5012Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 300m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2810m 17% 4575Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2840m 17% 14364Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14144Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14204Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 273m 1% 2069Mi 3%
19:10:27 DEBUG --- stderr ---
19:10:27 DEBUG
19:11:25 INFO
19:11:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:11:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:11:25 INFO [loop_until]: OK (rc = 0)
19:11:25 DEBUG --- stdout ---
19:11:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 10m 4458Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2952m 13801Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2933m 3626Mi
idm-65858d8c4c-8ff69 1971m 3738Mi
lodemon-56989b88bb-nm2fw 5m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 214m 547Mi
19:11:25 DEBUG --- stderr ---
19:11:25 DEBUG
19:11:27 INFO
19:11:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:11:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:11:27 INFO [loop_until]: OK (rc = 0)
19:11:27 DEBUG --- stdout ---
19:11:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5415Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5710Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2162m 13% 5018Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 305m 1% 2141Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2374m 14% 4578Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2788m 17% 14371Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14140Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14202Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 276m 1% 2069Mi 3%
19:11:27 DEBUG --- stderr ---
19:11:27 DEBUG
19:12:25 INFO
19:12:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:12:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:12:25 INFO [loop_until]: OK (rc = 0)
19:12:25 DEBUG --- stdout ---
19:12:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4457Mi
am-55f77847b7-ngpns 8m 4458Mi
am-55f77847b7-q6zcv 4m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2691m 13799Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 2330m 3630Mi
idm-65858d8c4c-8ff69 2250m 3742Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 201m 547Mi
19:12:25 DEBUG --- stderr ---
19:12:25 DEBUG
19:12:27 INFO
19:12:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:12:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:12:27 INFO [loop_until]: OK (rc = 0)
19:12:27 DEBUG --- stdout ---
19:12:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5416Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5709Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5511Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2197m 13% 5020Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 312m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2557m 16% 4581Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1116Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2723m 17% 14374Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14142Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14206Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 278m 1% 2069Mi 3%
19:12:27 DEBUG --- stderr ---
19:12:27 DEBUG
19:13:25 INFO
19:13:25 INFO [loop_until]: kubectl --namespace=xlou top pods
19:13:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:13:25 INFO [loop_until]: OK (rc = 0)
19:13:25 DEBUG --- stdout ---
19:13:25 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 8m 4458Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 5m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2848m 13824Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2296m 3633Mi
idm-65858d8c4c-8ff69 2084m 3744Mi
lodemon-56989b88bb-nm2fw 1m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 207m 547Mi
19:13:25 DEBUG --- stderr ---
19:13:25 DEBUG
19:13:27 INFO
19:13:27 INFO [loop_until]: kubectl --namespace=xlou top node
19:13:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:13:28 INFO [loop_until]: OK (rc = 0)
19:13:28 DEBUG --- stdout ---
19:13:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5416Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5710Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0%
5513Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2132m 13% 5024Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 314m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2324m 14% 4586Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2848m 17% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14141Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14205Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 272m 1% 2070Mi 3% 19:13:28 DEBUG --- stderr --- 19:13:28 DEBUG 19:14:25 INFO 19:14:25 INFO [loop_until]: kubectl --namespace=xlou top pods 19:14:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:14:26 INFO [loop_until]: OK (rc = 0) 19:14:26 DEBUG --- stdout --- 19:14:26 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4457Mi am-55f77847b7-ngpns 8m 4458Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 6m 412Mi ds-cts-1 7m 383Mi ds-cts-2 5m 374Mi ds-idrepo-0 2538m 13804Mi ds-idrepo-1 9m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2696m 3637Mi idm-65858d8c4c-8ff69 1919m 3749Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 203m 547Mi 19:14:26 DEBUG --- stderr --- 19:14:26 DEBUG 19:14:28 INFO 19:14:28 INFO [loop_until]: kubectl --namespace=xlou top node 19:14:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:14:28 INFO [loop_until]: OK (rc = 0) 19:14:28 DEBUG --- stdout --- 19:14:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5709Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5512Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1854m 11% 5027Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 298m 1% 2135Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 2389m 15% 4590Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2646m 16% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14146Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14205Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 271m 1% 2071Mi 3% 19:14:28 DEBUG --- stderr --- 19:14:28 DEBUG 19:15:26 INFO 19:15:26 INFO [loop_until]: kubectl --namespace=xlou top pods 19:15:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:15:26 INFO [loop_until]: OK (rc = 0) 19:15:26 DEBUG --- stdout --- 19:15:26 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4457Mi am-55f77847b7-ngpns 8m 4458Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 412Mi ds-cts-1 8m 382Mi ds-cts-2 6m 374Mi ds-idrepo-0 2887m 13823Mi ds-idrepo-1 10m 13636Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2775m 3640Mi idm-65858d8c4c-8ff69 2107m 3751Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 224m 544Mi 19:15:26 DEBUG --- stderr --- 19:15:26 DEBUG 19:15:28 INFO 19:15:28 INFO [loop_until]: kubectl --namespace=xlou top node 19:15:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:15:28 INFO [loop_until]: OK (rc = 0) 19:15:28 DEBUG --- stdout --- 19:15:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5415Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5707Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5508Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2239m 14% 5027Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 309m 1% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2679m 16% 4593Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1069Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2966m 18% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14141Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14206Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 287m 1% 2066Mi 3% 19:15:28 DEBUG --- stderr --- 19:15:28 DEBUG 19:16:26 INFO 19:16:26 INFO [loop_until]: kubectl --namespace=xlou top pods 19:16:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:16:26 INFO [loop_until]: OK (rc = 0) 19:16:26 DEBUG --- stdout --- 19:16:26 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4458Mi am-55f77847b7-ngpns 9m 4458Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 412Mi ds-cts-1 8m 382Mi ds-cts-2 6m 374Mi ds-idrepo-0 3103m 13802Mi ds-idrepo-1 8m 13635Mi ds-idrepo-2 14m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2798m 3644Mi idm-65858d8c4c-8ff69 2309m 3753Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 220m 544Mi 19:16:26 DEBUG --- stderr --- 19:16:26 DEBUG 19:16:28 INFO 19:16:28 INFO [loop_until]: kubectl --namespace=xlou top node 19:16:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:16:28 INFO [loop_until]: OK (rc = 0) 19:16:28 DEBUG --- stdout --- 19:16:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5417Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5706Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5512Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2374m 14% 5031Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 320m 2% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2712m 17% 4592Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3105m 19% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14144Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14207Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 296m 1% 2078Mi 3%
19:16:28 DEBUG --- stderr ---
19:16:28 DEBUG
19:17:26 INFO
19:17:26 INFO [loop_until]: kubectl --namespace=xlou top pods
19:17:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:17:26 INFO [loop_until]: OK (rc = 0)
19:17:26 DEBUG --- stdout ---
19:17:26 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 9m 4459Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 6m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2820m 13827Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2655m 3646Mi
idm-65858d8c4c-8ff69 1946m 3756Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 208m 544Mi
19:17:26 DEBUG --- stderr ---
19:17:26 DEBUG
19:17:28 INFO
19:17:28 INFO [loop_until]: kubectl --namespace=xlou top node
19:17:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:17:28 INFO [loop_until]: OK (rc = 0)
19:17:28 DEBUG --- stdout ---
19:17:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5708Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5511Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1988m 12% 5034Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 291m 1% 2133Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2747m 17% 4597Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2654m 16% 14374Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14142Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14210Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 273m 1% 2068Mi 3%
19:17:28 DEBUG --- stderr ---
19:17:28 DEBUG
19:18:26 INFO
19:18:26 INFO [loop_until]: kubectl --namespace=xlou top pods
19:18:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:18:26 INFO [loop_until]: OK (rc = 0)
19:18:26 DEBUG --- stdout ---
19:18:26 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4457Mi
am-55f77847b7-ngpns 10m 4460Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2778m 13802Mi
ds-idrepo-1 11m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2530m 3651Mi
idm-65858d8c4c-8ff69 2129m 3759Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 206m 545Mi
19:18:26 DEBUG --- stderr ---
19:18:26 DEBUG
19:18:28 INFO
19:18:28 INFO [loop_until]: kubectl --namespace=xlou top node
19:18:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:18:28 INFO [loop_until]: OK (rc = 0)
19:18:28 DEBUG --- stdout ---
19:18:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5512Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2192m 13% 5031Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 301m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2419m 15% 4601Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2779m 17% 14374Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14143Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14207Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 269m 1% 2065Mi 3%
19:18:28 DEBUG --- stderr ---
19:18:28 DEBUG
19:19:26 INFO
19:19:26 INFO [loop_until]: kubectl --namespace=xlou top pods
19:19:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:19:26 INFO [loop_until]: OK (rc = 0)
19:19:26 DEBUG --- stdout ---
19:19:26 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4458Mi
am-55f77847b7-ngpns 8m 4460Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2716m 13800Mi
ds-idrepo-1 8m 13636Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2484m 3655Mi
idm-65858d8c4c-8ff69 1993m 3760Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 205m 545Mi
19:19:26 DEBUG --- stderr ---
19:19:26 DEBUG
19:19:28 INFO
19:19:28 INFO [loop_until]: kubectl --namespace=xlou top node
19:19:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:19:28 INFO [loop_until]: OK (rc = 0)
19:19:28 DEBUG --- stdout ---
19:19:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5417Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5713Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2025m 12% 5037Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 291m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2541m 15% 4605Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2804m 17% 14372Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14145Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14210Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 273m 1% 2069Mi 3%
19:19:28 DEBUG --- stderr ---
19:19:28 DEBUG
19:20:26 INFO
19:20:26 INFO [loop_until]: kubectl --namespace=xlou top pods
19:20:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:20:26 INFO [loop_until]: OK (rc = 0)
19:20:26 DEBUG --- stdout ---
19:20:26 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4458Mi
am-55f77847b7-ngpns 8m 4460Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2712m 13823Mi
ds-idrepo-1 8m 13636Mi
ds-idrepo-2 12m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2345m 3659Mi
idm-65858d8c4c-8ff69 1966m 3764Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 204m 545Mi
19:20:26 DEBUG --- stderr ---
19:20:26 DEBUG
19:20:28 INFO
19:20:28 INFO [loop_until]: kubectl --namespace=xlou top node
19:20:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:20:28 INFO [loop_until]: OK (rc = 0)
19:20:28 DEBUG --- stdout ---
19:20:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1267Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5418Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5710Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5511Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1970m 12% 5037Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 303m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2382m 14% 4609Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1115Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2539m 15% 14372Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14146Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14211Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 274m 1% 2068Mi 3%
19:20:28 DEBUG --- stderr ---
19:20:28 DEBUG
19:21:26 INFO
19:21:26 INFO [loop_until]: kubectl --namespace=xlou top pods
19:21:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:21:26 INFO [loop_until]: OK (rc = 0)
19:21:26 DEBUG --- stdout ---
19:21:26 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4458Mi
am-55f77847b7-ngpns 8m 4460Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 5m 412Mi
ds-cts-1 5m 382Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2767m 13825Mi
ds-idrepo-1 11m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2414m 3661Mi
idm-65858d8c4c-8ff69 2231m 3766Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 210m 545Mi
19:21:26 DEBUG --- stderr ---
19:21:26 DEBUG
19:21:28 INFO
19:21:28 INFO [loop_until]: kubectl --namespace=xlou top node
19:21:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:21:28 INFO [loop_until]: OK (rc = 0)
19:21:28 DEBUG --- stdout ---
19:21:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1266Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5418Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5709Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5509Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2371m 14% 5044Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 313m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2647m 16% 4610Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1116Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3048m 19% 14371Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14146Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14206Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 278m 1% 2070Mi 3%
19:21:28 DEBUG --- stderr ---
19:21:28 DEBUG
19:22:26 INFO
19:22:26 INFO [loop_until]: kubectl --namespace=xlou top pods
19:22:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:22:26 INFO [loop_until]: OK (rc = 0)
19:22:26 DEBUG --- stdout ---
19:22:26 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 10m 4458Mi
am-55f77847b7-ngpns 7m 4460Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 2741m 13808Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2585m 3664Mi
idm-65858d8c4c-8ff69 1972m 3769Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 206m 545Mi
19:22:26 DEBUG --- stderr ---
19:22:26 DEBUG
19:22:29 INFO
19:22:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:22:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:22:29 INFO [loop_until]: OK (rc = 0)
19:22:29 DEBUG --- stdout ---
19:22:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5420Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5710Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5510Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2022m 12% 5046Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 299m 1% 2143Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2536m 15% 4615Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2815m 17% 14394Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 14148Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14206Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 274m 1% 2070Mi 3%
19:22:29 DEBUG --- stderr ---
19:22:29 DEBUG
19:23:26 INFO
19:23:26 INFO [loop_until]: kubectl --namespace=xlou top pods
19:23:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:23:26 INFO [loop_until]: OK (rc = 0)
19:23:26 DEBUG --- stdout ---
19:23:26 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4458Mi
am-55f77847b7-ngpns 8m 4460Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 7m 382Mi
ds-cts-2 5m 374Mi
ds-idrepo-0 2624m 13824Mi
ds-idrepo-1 8m 13636Mi
ds-idrepo-2 26m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2459m 3667Mi
idm-65858d8c4c-8ff69 1820m 3770Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 187m 545Mi
19:23:26 DEBUG --- stderr ---
19:23:26 DEBUG
19:23:29 INFO
19:23:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:23:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:23:29 INFO [loop_until]: OK (rc = 0)
19:23:29 DEBUG --- stdout ---
19:23:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5416Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5708Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5513Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1766m 11% 5049Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 290m 1% 2142Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2554m 16% 4619Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1114Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2543m 16% 14378Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 75m 0% 14144Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14210Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 260m 1% 2069Mi 3%
19:23:29 DEBUG --- stderr ---
19:23:29 DEBUG
19:24:27 INFO
19:24:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:24:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:24:27 INFO [loop_until]: OK (rc = 0)
19:24:27 DEBUG --- stdout ---
19:24:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4458Mi
am-55f77847b7-ngpns 8m 4460Mi
am-55f77847b7-q6zcv 15m 4639Mi
ds-cts-0 6m 412Mi
ds-cts-1 12m 382Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2714m 13801Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 13m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 2546m 3670Mi
idm-65858d8c4c-8ff69 1847m 3773Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 202m 546Mi
19:24:27 DEBUG --- stderr ---
19:24:27 DEBUG
19:24:29 INFO
19:24:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:24:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:24:29 INFO [loop_until]: OK (rc = 0)
19:24:29 DEBUG
--- stdout ---
19:24:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5419Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 5713Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5515Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1823m 11% 5048Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 293m 1% 2137Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2598m 16% 4623Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2736m 17% 14377Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14144Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14207Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 272m 1% 2072Mi 3%
19:24:29 DEBUG --- stderr ---
19:24:29 DEBUG
19:25:27 INFO
19:25:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:25:27 INFO [loop_until]: OK (rc = 0)
19:25:27 DEBUG --- stdout ---
19:25:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4458Mi
am-55f77847b7-ngpns 8m 4460Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 414Mi
ds-cts-1 7m 382Mi
ds-cts-2 5m 373Mi
ds-idrepo-0 2769m 13810Mi
ds-idrepo-1 11m 13635Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2539m 3689Mi
idm-65858d8c4c-8ff69 2180m 3775Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 218m 546Mi
19:25:27 DEBUG --- stderr ---
19:25:27 DEBUG
19:25:29 INFO
19:25:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:25:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:25:29 INFO [loop_until]: OK (rc = 0)
19:25:29 DEBUG --- stdout ---
19:25:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5419Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5515Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2170m 13% 5054Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 310m 1% 2138Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2605m 16% 4639Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2945m 18% 14373Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14146Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14210Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 289m 1% 2070Mi 3%
19:25:29 DEBUG --- stderr ---
19:25:29 DEBUG
19:26:27 INFO
19:26:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:26:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:26:27 INFO [loop_until]: OK (rc = 0)
19:26:27 DEBUG --- stdout ---
19:26:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4463Mi
am-55f77847b7-ngpns 8m 4460Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 414Mi
ds-cts-1 6m 382Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 2520m 13828Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 20m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2367m 3674Mi
idm-65858d8c4c-8ff69 1774m 3777Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 195m 547Mi
19:26:27 DEBUG --- stderr ---
19:26:27 DEBUG
19:26:29 INFO
19:26:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:26:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:26:29 INFO [loop_until]: OK (rc = 0)
19:26:29 DEBUG --- stdout ---
19:26:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5421Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5714Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5514Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1781m 11% 5052Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 282m 1% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2478m 15% 4630Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1072Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2365m 14% 14402Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14150Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 52m 0% 14210Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 252m 1% 2072Mi 3%
19:26:29 DEBUG --- stderr ---
19:26:29 DEBUG
19:27:27 INFO
19:27:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:27:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:27:27 INFO [loop_until]: OK (rc = 0)
19:27:27 DEBUG --- stdout ---
19:27:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 9m 4472Mi
am-55f77847b7-ngpns 9m 4460Mi
am-55f77847b7-q6zcv 8m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 7m 382Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 10m 13828Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 10m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 7m 3674Mi
idm-65858d8c4c-8ff69 7m 3776Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 20m 104Mi
19:27:27 DEBUG --- stderr ---
19:27:27 DEBUG
19:27:29 INFO
19:27:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:27:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:27:29 INFO [loop_until]: OK (rc = 0)
19:27:29 DEBUG --- stdout ---
19:27:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5432Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5714Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5512Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 5056Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2133Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4624Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14399Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14144Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 53m 0% 14211Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1637Mi 2%
19:27:29 DEBUG --- stderr ---
19:27:29 DEBUG
19:28:27 INFO
19:28:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:28:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:28:27 INFO [loop_until]: OK (rc = 0)
19:28:27 DEBUG --- stdout ---
19:28:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 11m 4480Mi
am-55f77847b7-ngpns 12m 4465Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 8m 382Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 10m 13828Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 10m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 7m 3674Mi
idm-65858d8c4c-8ff69 7m 3776Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 1m 104Mi
19:28:27 DEBUG --- stderr ---
19:28:27 DEBUG
19:28:29 INFO
19:28:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:28:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:28:29 INFO [loop_until]: OK (rc = 0)
19:28:29 DEBUG --- stdout ---
19:28:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5444Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5714Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5519Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5056Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2149Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 4626Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1069Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14400Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14142Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 54m 0% 14210Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1635Mi 2%
19:28:29 DEBUG --- stderr ---
19:28:29 DEBUG
127.0.0.1 - - [11/Aug/2023 19:28:39] "GET /monitoring/average?start_time=23-08-11_17:58:13&stop_time=23-08-11_18:26:38 HTTP/1.1" 200 -
19:29:27 INFO
19:29:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:29:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:29:27 INFO [loop_until]: OK (rc = 0)
19:29:27 DEBUG --- stdout ---
19:29:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 10m 4491Mi
am-55f77847b7-ngpns 14m 4477Mi
am-55f77847b7-q6zcv 8m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 9m 383Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 1208m 13809Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 10m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 1220m 3678Mi
idm-65858d8c4c-8ff69 1108m 3780Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 464m 509Mi
19:29:27 DEBUG --- stderr ---
19:29:27 DEBUG
19:29:29 INFO
19:29:29 INFO [loop_until]: kubectl --namespace=xlou top node
19:29:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:29:29 INFO [loop_until]: OK (rc = 0)
19:29:29 DEBUG --- stdout ---
19:29:29 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5452Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5723Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 5528Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1404m 8% 5054Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 232m 1% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1445m 9% 4630Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1071Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2331m 14% 14382Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14148Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 54m 0% 14212Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 594m 3% 2027Mi 3%
19:29:29 DEBUG --- stderr ---
19:29:29 DEBUG
19:30:27 INFO
19:30:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:30:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:30:27 INFO [loop_until]: OK (rc = 0)
19:30:27 DEBUG --- stdout ---
19:30:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 28m 4604Mi
am-55f77847b7-ngpns 9m 4486Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 5m 413Mi
ds-cts-1 7m 383Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 3691m 13823Mi
ds-idrepo-1 8m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3198m 3683Mi
idm-65858d8c4c-8ff69 2365m 3784Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 465m 504Mi
19:30:27 DEBUG --- stderr ---
19:30:27 DEBUG
19:30:30 INFO
19:30:30 INFO [loop_until]: kubectl --namespace=xlou top node
19:30:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:30:30 INFO [loop_until]: OK (rc = 0)
19:30:30 DEBUG --- stdout ---
19:30:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 5566Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5538Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2382m 14% 5058Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 326m 2% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3246m 20% 4633Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1060Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1071Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3664m 23% 14383Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14146Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 52m 0% 14213Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 486m 3% 2028Mi 3%
19:30:30 DEBUG --- stderr ---
19:30:30 DEBUG
19:31:27 INFO
19:31:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:31:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:31:27 INFO [loop_until]: OK (rc = 0)
19:31:27 DEBUG --- stdout ---
19:31:27 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4604Mi
am-55f77847b7-ngpns 10m 4497Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 7m 383Mi
ds-cts-2 5m 373Mi
ds-idrepo-0 3694m 13818Mi
ds-idrepo-1 9m 13635Mi
ds-idrepo-2 12m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3394m 3687Mi
idm-65858d8c4c-8ff69 2422m 3789Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 342m 509Mi
19:31:27 DEBUG --- stderr ---
19:31:27 DEBUG
19:31:30 INFO
19:31:30 INFO [loop_until]: kubectl --namespace=xlou top node
19:31:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:31:30 INFO [loop_until]: OK (rc = 0)
19:31:30 DEBUG --- stdout ---
19:31:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1262Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 5565Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5711Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5550Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2515m 15% 5066Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 341m 2% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3360m 21% 4634Mi 7%
gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1063Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1070Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3880m 24% 14395Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14147Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 53m 0% 14215Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 400m 2% 2032Mi 3%
19:31:30 DEBUG --- stderr ---
19:31:30 DEBUG
19:32:27 INFO
19:32:27 INFO [loop_until]: kubectl --namespace=xlou top pods
19:32:27 INFO [loop_until]: (max_time=180,
interval=5, expected_rc=[0] 19:32:27 INFO [loop_until]: OK (rc = 0) 19:32:27 DEBUG --- stdout --- 19:32:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4604Mi am-55f77847b7-ngpns 9m 4505Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 6m 383Mi ds-cts-2 6m 373Mi ds-idrepo-0 3702m 13823Mi ds-idrepo-1 10m 13636Mi ds-idrepo-2 14m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3279m 3690Mi idm-65858d8c4c-8ff69 2486m 3790Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 256m 509Mi 19:32:27 DEBUG --- stderr --- 19:32:27 DEBUG 19:32:30 INFO 19:32:30 INFO [loop_until]: kubectl --namespace=xlou top node 19:32:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:32:30 INFO [loop_until]: OK (rc = 0) 19:32:30 DEBUG --- stdout --- 19:32:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5565Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5714Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5559Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2508m 15% 5068Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 336m 2% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3293m 20% 4640Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3882m 24% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14146Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14215Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 334m 2% 2035Mi 3% 19:32:30 DEBUG --- stderr --- 19:32:30 DEBUG 19:33:27 INFO 19:33:27 INFO [loop_until]: kubectl --namespace=xlou top pods 19:33:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:33:27 INFO [loop_until]: OK (rc = 0) 19:33:27 DEBUG --- stdout --- 19:33:27 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4604Mi am-55f77847b7-ngpns 16m 4627Mi am-55f77847b7-q6zcv 6m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 8m 383Mi ds-cts-2 5m 374Mi ds-idrepo-0 3832m 13801Mi ds-idrepo-1 8m 13636Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3444m 3696Mi idm-65858d8c4c-8ff69 2486m 3793Mi lodemon-56989b88bb-nm2fw 1m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 264m 510Mi 19:33:27 DEBUG --- stderr --- 19:33:27 DEBUG 19:33:30 INFO 19:33:30 INFO [loop_until]: kubectl --namespace=xlou top node 19:33:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:33:30 INFO [loop_until]: OK (rc = 0) 19:33:30 DEBUG --- stdout --- 19:33:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5566Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5710Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 5678Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2768m 17% 5070Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 345m 2% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3446m 21% 4643Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3999m 25% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14145Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14225Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 342m 2% 2035Mi 3% 19:33:30 DEBUG --- stderr --- 19:33:30 DEBUG 19:34:28 INFO 19:34:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:34:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:34:28 INFO [loop_until]: OK (rc = 0) 19:34:28 DEBUG --- stdout --- 19:34:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4604Mi am-55f77847b7-ngpns 8m 4628Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 7m 383Mi 
ds-cts-2 6m 373Mi ds-idrepo-0 2941m 13822Mi ds-idrepo-1 10m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2598m 3699Mi idm-65858d8c4c-8ff69 2310m 3797Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 231m 542Mi 19:34:28 DEBUG --- stderr --- 19:34:28 DEBUG 19:34:30 INFO 19:34:30 INFO [loop_until]: kubectl --namespace=xlou top node 19:34:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:34:30 INFO [loop_until]: OK (rc = 0) 19:34:30 DEBUG --- stdout --- 19:34:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5564Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5711Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5678Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2459m 15% 5073Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 331m 2% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2521m 15% 4644Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3207m 20% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14145Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14215Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 298m 1% 2066Mi 3% 19:34:30 DEBUG --- stderr --- 19:34:30 DEBUG 19:35:28 INFO 19:35:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:35:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:35:28 INFO [loop_until]: OK (rc = 0) 19:35:28 DEBUG --- stdout --- 19:35:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4604Mi am-55f77847b7-ngpns 6m 4627Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 4m 383Mi ds-cts-2 5m 375Mi ds-idrepo-0 3592m 13801Mi ds-idrepo-1 9m 13636Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3289m 3703Mi 
idm-65858d8c4c-8ff69 2584m 3804Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 246m 542Mi 19:35:28 DEBUG --- stderr --- 19:35:28 DEBUG 19:35:30 INFO 19:35:30 INFO [loop_until]: kubectl --namespace=xlou top node 19:35:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:35:30 INFO [loop_until]: OK (rc = 0) 19:35:30 DEBUG --- stdout --- 19:35:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1267Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 5563Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5710Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5679Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2648m 16% 5083Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 328m 2% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3304m 20% 4653Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 47m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3599m 22% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14146Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 315m 1% 2067Mi 3% 19:35:30 DEBUG --- stderr --- 19:35:30 DEBUG 19:36:28 INFO 19:36:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:36:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:36:28 INFO [loop_until]: OK (rc = 0) 19:36:28 DEBUG --- stdout --- 19:36:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 20m 4608Mi am-55f77847b7-ngpns 6m 4627Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 4m 382Mi ds-cts-2 5m 373Mi ds-idrepo-0 3309m 13803Mi ds-idrepo-1 10m 13635Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3184m 3706Mi idm-65858d8c4c-8ff69 2564m 3805Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 241m 543Mi 19:36:28 DEBUG --- 
stderr --- 19:36:28 DEBUG 19:36:30 INFO 19:36:30 INFO [loop_until]: kubectl --namespace=xlou top node 19:36:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:36:30 INFO [loop_until]: OK (rc = 0) 19:36:30 DEBUG --- stdout --- 19:36:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 5568Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5709Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2240m 14% 5084Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 334m 2% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3300m 20% 4657Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 50m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3416m 21% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14147Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14213Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 315m 1% 2068Mi 3% 19:36:30 DEBUG --- stderr --- 19:36:30 DEBUG 19:37:28 INFO 19:37:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:37:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:37:28 INFO [loop_until]: OK (rc = 0) 19:37:28 DEBUG --- stdout --- 19:37:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 6m 4627Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 5m 383Mi ds-cts-2 6m 373Mi ds-idrepo-0 3748m 13799Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3541m 3731Mi idm-65858d8c4c-8ff69 2589m 3825Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 267m 544Mi 19:37:28 DEBUG --- stderr --- 19:37:28 DEBUG 19:37:30 INFO 19:37:30 INFO [loop_until]: kubectl --namespace=xlou top node 19:37:30 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 19:37:30 INFO [loop_until]: OK (rc = 0) 19:37:30 DEBUG --- stdout --- 19:37:30 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5566Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5680Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2549m 16% 5099Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 341m 2% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3760m 23% 4681Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3797m 23% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14144Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 329m 2% 2067Mi 3% 19:37:30 DEBUG --- stderr --- 19:37:30 DEBUG 19:38:28 INFO 19:38:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:38:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:38:28 INFO [loop_until]: OK (rc = 0) 19:38:28 DEBUG --- stdout --- 19:38:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 12m 4627Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 4m 383Mi ds-cts-2 6m 373Mi ds-idrepo-0 3299m 13796Mi ds-idrepo-1 10m 13636Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3266m 3714Mi idm-65858d8c4c-8ff69 2304m 3809Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 236m 544Mi 19:38:28 DEBUG --- stderr --- 19:38:28 DEBUG 19:38:31 INFO 19:38:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:38:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:38:31 INFO [loop_until]: OK (rc = 0) 19:38:31 DEBUG --- stdout --- 19:38:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5568Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5711Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5680Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2580m 16% 5085Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 340m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3279m 20% 4657Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 49m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3525m 22% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14144Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14215Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 302m 1% 2068Mi 3% 19:38:31 DEBUG --- stderr --- 19:38:31 DEBUG 19:39:28 INFO 19:39:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:39:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:39:28 INFO [loop_until]: OK (rc = 0) 19:39:28 DEBUG --- stdout --- 19:39:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 7m 4627Mi am-55f77847b7-q6zcv 6m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 4m 383Mi ds-cts-2 5m 373Mi ds-idrepo-0 3390m 13825Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3244m 3716Mi idm-65858d8c4c-8ff69 2570m 3819Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 255m 544Mi 19:39:28 DEBUG --- stderr --- 19:39:28 DEBUG 19:39:31 INFO 19:39:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:39:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:39:31 INFO [loop_until]: OK (rc = 0) 19:39:31 DEBUG --- stdout --- 19:39:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5566Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 
5714Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2704m 17% 5097Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 329m 2% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3306m 20% 4661Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 48m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3552m 22% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14148Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14215Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 321m 2% 2069Mi 3% 19:39:31 DEBUG --- stderr --- 19:39:31 DEBUG 19:40:28 INFO 19:40:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:40:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:40:28 INFO [loop_until]: OK (rc = 0) 19:40:28 DEBUG --- stdout --- 19:40:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 15m 4608Mi am-55f77847b7-ngpns 6m 4627Mi am-55f77847b7-q6zcv 6m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 4m 383Mi ds-cts-2 6m 374Mi ds-idrepo-0 3284m 13803Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2993m 3718Mi idm-65858d8c4c-8ff69 2326m 3813Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 233m 543Mi 19:40:28 DEBUG --- stderr --- 19:40:28 DEBUG 19:40:31 INFO 19:40:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:40:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:40:31 INFO [loop_until]: OK (rc = 0) 19:40:31 DEBUG --- stdout --- 19:40:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5568Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5709Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2388m 15% 5085Mi 8% 
gke-xlou-cdm-default-pool-f05840a3-h81k 334m 2% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3255m 20% 4666Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 50m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3283m 20% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14215Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 304m 1% 2068Mi 3% 19:40:31 DEBUG --- stderr --- 19:40:31 DEBUG 19:41:28 INFO 19:41:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:41:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:41:28 INFO [loop_until]: OK (rc = 0) 19:41:28 DEBUG --- stdout --- 19:41:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 6m 4627Mi am-55f77847b7-q6zcv 6m 4639Mi ds-cts-0 6m 413Mi ds-cts-1 4m 383Mi ds-cts-2 6m 373Mi ds-idrepo-0 3212m 13809Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3065m 3723Mi idm-65858d8c4c-8ff69 2261m 3816Mi lodemon-56989b88bb-nm2fw 1m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 238m 544Mi 19:41:28 DEBUG --- stderr --- 19:41:28 DEBUG 19:41:31 INFO 19:41:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:41:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:41:31 INFO [loop_until]: OK (rc = 0) 19:41:31 DEBUG --- stdout --- 19:41:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1275Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5678Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2418m 15% 5093Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 329m 2% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3058m 19% 4667Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1064Mi 
1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3363m 21% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 296m 1% 2070Mi 3% 19:41:31 DEBUG --- stderr --- 19:41:31 DEBUG 19:42:28 INFO 19:42:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:42:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:42:28 INFO [loop_until]: OK (rc = 0) 19:42:28 DEBUG --- stdout --- 19:42:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 6m 4628Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 5m 383Mi ds-cts-2 5m 373Mi ds-idrepo-0 3130m 13823Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2910m 3726Mi idm-65858d8c4c-8ff69 2276m 3819Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 234m 544Mi 19:42:28 DEBUG --- stderr --- 19:42:28 DEBUG 19:42:31 INFO 19:42:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:42:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:42:31 INFO [loop_until]: OK (rc = 0) 19:42:31 DEBUG --- stdout --- 19:42:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5566Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5711Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2482m 15% 5095Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 320m 2% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2959m 18% 4673Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3223m 20% 14385Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14147Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 300m 1% 2070Mi 3% 19:42:31 DEBUG --- stderr --- 19:42:31 DEBUG 19:43:28 INFO 19:43:28 INFO [loop_until]: kubectl --namespace=xlou top pods 19:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:43:29 INFO [loop_until]: OK (rc = 0) 19:43:29 DEBUG --- stdout --- 19:43:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 6m 4628Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 6m 413Mi ds-cts-1 5m 383Mi ds-cts-2 5m 373Mi ds-idrepo-0 3128m 13804Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 2244m 3730Mi idm-65858d8c4c-8ff69 2425m 3822Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 225m 545Mi 19:43:29 DEBUG --- stderr --- 19:43:29 DEBUG 19:43:31 INFO 19:43:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:43:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:43:31 INFO [loop_until]: OK (rc = 0) 19:43:31 DEBUG --- stdout --- 19:43:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1258Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5680Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2506m 15% 5097Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 328m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2823m 17% 4678Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3079m 19% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14147Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 297m 1% 2069Mi 3% 19:43:31 
DEBUG --- stderr --- 19:43:31 DEBUG 19:44:29 INFO 19:44:29 INFO [loop_until]: kubectl --namespace=xlou top pods 19:44:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:44:29 INFO [loop_until]: OK (rc = 0) 19:44:29 DEBUG --- stdout --- 19:44:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 7m 4628Mi am-55f77847b7-q6zcv 7m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 5m 383Mi ds-cts-2 5m 373Mi ds-idrepo-0 3013m 13808Mi ds-idrepo-1 11m 13636Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3054m 3733Mi idm-65858d8c4c-8ff69 2198m 3824Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 232m 544Mi 19:44:29 DEBUG --- stderr --- 19:44:29 DEBUG 19:44:31 INFO 19:44:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:44:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:44:31 INFO [loop_until]: OK (rc = 0) 19:44:31 DEBUG --- stdout --- 19:44:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5568Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5711Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5678Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2203m 13% 5095Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 326m 2% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3109m 19% 4680Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3087m 19% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14216Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 299m 1% 2071Mi 3% 19:44:31 DEBUG --- stderr --- 19:44:31 DEBUG 19:45:29 INFO 19:45:29 INFO [loop_until]: kubectl --namespace=xlou top pods 19:45:29 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 19:45:29 INFO [loop_until]: OK (rc = 0) 19:45:29 DEBUG --- stdout --- 19:45:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 6m 4628Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 5m 384Mi ds-cts-2 6m 375Mi ds-idrepo-0 3656m 13803Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 12m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3180m 3737Mi idm-65858d8c4c-8ff69 2553m 3827Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 253m 545Mi 19:45:29 DEBUG --- stderr --- 19:45:29 DEBUG 19:45:31 INFO 19:45:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:45:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:45:31 INFO [loop_until]: OK (rc = 0) 19:45:31 DEBUG --- stdout --- 19:45:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1260Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5567Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 5710Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5676Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2734m 17% 5100Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 340m 2% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3310m 20% 4686Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3671m 23% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14151Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14218Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 322m 2% 2068Mi 3% 19:45:31 DEBUG --- stderr --- 19:45:31 DEBUG 19:46:29 INFO 19:46:29 INFO [loop_until]: kubectl --namespace=xlou top pods 19:46:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:46:29 INFO [loop_until]: OK (rc = 0) 19:46:29 DEBUG --- stdout --- 19:46:29 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 6m 4628Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 413Mi ds-cts-1 5m 383Mi ds-cts-2 6m 373Mi ds-idrepo-0 3509m 13801Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3330m 3741Mi idm-65858d8c4c-8ff69 2510m 3830Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 247m 545Mi 19:46:29 DEBUG --- stderr --- 19:46:29 DEBUG 19:46:31 INFO 19:46:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:46:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:46:31 INFO [loop_until]: OK (rc = 0) 19:46:31 DEBUG --- stdout --- 19:46:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5691Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2523m 15% 5108Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 322m 2% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3360m 21% 4689Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3625m 22% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14149Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 319m 2% 2068Mi 3% 19:46:31 DEBUG --- stderr --- 19:46:31 DEBUG 19:47:29 INFO 19:47:29 INFO [loop_until]: kubectl --namespace=xlou top pods 19:47:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:47:29 INFO [loop_until]: OK (rc = 0) 19:47:29 DEBUG --- stdout --- 19:47:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 6m 4628Mi am-55f77847b7-q6zcv 5m 4639Mi ds-cts-0 5m 414Mi ds-cts-1 6m 383Mi 
ds-cts-2 6m 374Mi ds-idrepo-0 3374m 13812Mi ds-idrepo-1 12m 13636Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2921m 3744Mi idm-65858d8c4c-8ff69 2771m 3834Mi lodemon-56989b88bb-nm2fw 2m 67Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 248m 545Mi 19:47:29 DEBUG --- stderr --- 19:47:29 DEBUG 19:47:31 INFO 19:47:31 INFO [loop_until]: kubectl --namespace=xlou top node 19:47:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:47:32 INFO [loop_until]: OK (rc = 0) 19:47:32 DEBUG --- stdout --- 19:47:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5714Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2911m 18% 5111Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 341m 2% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3138m 19% 4689Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3520m 22% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14158Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14222Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 311m 1% 2069Mi 3% 19:47:32 DEBUG --- stderr --- 19:47:32 DEBUG 19:48:29 INFO 19:48:29 INFO [loop_until]: kubectl --namespace=xlou top pods 19:48:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:48:29 INFO [loop_until]: OK (rc = 0) 19:48:29 DEBUG --- stdout --- 19:48:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4608Mi am-55f77847b7-ngpns 6m 4628Mi am-55f77847b7-q6zcv 7m 4639Mi ds-cts-0 6m 413Mi ds-cts-1 5m 383Mi ds-cts-2 5m 373Mi ds-idrepo-0 3329m 13807Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 2800m 3746Mi 
idm-65858d8c4c-8ff69 2655m 3836Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 244m 546Mi
19:48:29 DEBUG --- stderr ---
19:48:29 DEBUG
19:48:32 INFO
19:48:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:48:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:48:32 INFO [loop_until]: OK (rc = 0)
19:48:32 DEBUG --- stdout ---
19:48:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5566Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5716Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5682Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2615m 16% 5122Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 328m 2% 2138Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2772m 17% 4692Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1075Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3326m 20% 14387Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14149Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14216Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 311m 1% 2070Mi 3%
19:48:32 DEBUG --- stderr ---
19:48:32 DEBUG
19:49:29 INFO
19:49:29 INFO [loop_until]: kubectl --namespace=xlou top pods
19:49:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:49:29 INFO [loop_until]: OK (rc = 0)
19:49:29 DEBUG --- stdout ---
19:49:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 7m 4628Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 5m 413Mi
ds-cts-1 5m 383Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 3255m 13801Mi
ds-idrepo-1 13m 13635Mi
ds-idrepo-2 12m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3141m 3751Mi
idm-65858d8c4c-8ff69 2280m 3839Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 250m 546Mi
19:49:29 DEBUG --- stderr ---
19:49:29 DEBUG
19:49:32 INFO
19:49:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:49:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:49:32 INFO [loop_until]: OK (rc = 0)
19:49:32 DEBUG --- stdout ---
19:49:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5570Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5714Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5684Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2353m 14% 5112Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 336m 2% 2141Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3333m 20% 4697Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3380m 21% 14390Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14148Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14219Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 320m 2% 2081Mi 3%
19:49:32 DEBUG --- stderr ---
19:49:32 DEBUG
19:50:29 INFO
19:50:29 INFO [loop_until]: kubectl --namespace=xlou top pods
19:50:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:50:29 INFO [loop_until]: OK (rc = 0)
19:50:29 DEBUG --- stdout ---
19:50:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 5m 4628Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 5m 414Mi
ds-cts-1 5m 383Mi
ds-cts-2 5m 373Mi
ds-idrepo-0 3301m 13828Mi
ds-idrepo-1 10m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2863m 3754Mi
idm-65858d8c4c-8ff69 2458m 3842Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 234m 546Mi
19:50:29 DEBUG --- stderr ---
19:50:29 DEBUG
19:50:32 INFO
19:50:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:50:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:50:32 INFO [loop_until]: OK (rc = 0)
19:50:32 DEBUG --- stdout ---
19:50:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5577Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5711Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5680Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2424m 15% 5119Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 326m 2% 2155Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2955m 18% 4700Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1063Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1073Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3250m 20% 14382Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14150Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14216Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 307m 1% 2071Mi 3%
19:50:32 DEBUG --- stderr ---
19:50:32 DEBUG
19:51:29 INFO
19:51:29 INFO [loop_until]: kubectl --namespace=xlou top pods
19:51:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:51:29 INFO [loop_until]: OK (rc = 0)
19:51:29 DEBUG --- stdout ---
19:51:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 6m 4628Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 383Mi
ds-cts-2 5m 373Mi
ds-idrepo-0 3303m 13801Mi
ds-idrepo-1 10m 13635Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3087m 3757Mi
idm-65858d8c4c-8ff69 2380m 3864Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 241m 546Mi
19:51:29 DEBUG --- stderr ---
19:51:29 DEBUG
19:51:32 INFO
19:51:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:51:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:51:32 INFO [loop_until]: OK (rc = 0)
19:51:32 DEBUG --- stdout ---
19:51:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1266Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5567Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5682Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2423m 15% 5140Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 327m 2% 2145Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3165m 19% 4705Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1075Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3349m 21% 14384Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14151Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14219Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 306m 1% 2070Mi 3%
19:51:32 DEBUG --- stderr ---
19:51:32 DEBUG
19:52:29 INFO
19:52:29 INFO [loop_until]: kubectl --namespace=xlou top pods
19:52:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:52:29 INFO [loop_until]: OK (rc = 0)
19:52:29 DEBUG --- stdout ---
19:52:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4608Mi
am-55f77847b7-ngpns 8m 4628Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 414Mi
ds-cts-1 5m 383Mi
ds-cts-2 5m 373Mi
ds-idrepo-0 3311m 13804Mi
ds-idrepo-1 11m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2833m 3761Mi
idm-65858d8c4c-8ff69 2626m 3848Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 256m 547Mi
19:52:29 DEBUG --- stderr ---
19:52:29 DEBUG
19:52:32 INFO
19:52:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:52:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:52:32 INFO [loop_until]: OK (rc = 0)
19:52:32 DEBUG --- stdout ---
19:52:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1267Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5568Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5711Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5681Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2693m 16% 5123Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 335m 2% 2144Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 2931m 18% 4707Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1068Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1073Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3495m 21% 14404Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14152Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14217Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 317m 1% 2071Mi 3%
19:52:32 DEBUG --- stderr ---
19:52:32 DEBUG
19:53:29 INFO
19:53:29 INFO [loop_until]: kubectl --namespace=xlou top pods
19:53:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:53:29 INFO [loop_until]: OK (rc = 0)
19:53:29 DEBUG --- stdout ---
19:53:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4608Mi
am-55f77847b7-ngpns 6m 4628Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 5m 414Mi
ds-cts-1 5m 383Mi
ds-cts-2 5m 374Mi
ds-idrepo-0 3183m 13805Mi
ds-idrepo-1 17m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3181m 3764Mi
idm-65858d8c4c-8ff69 2500m 3851Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 237m 547Mi
19:53:29 DEBUG --- stderr ---
19:53:29 DEBUG
19:53:32 INFO
19:53:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:53:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:53:32 INFO [loop_until]: OK (rc = 0)
19:53:32 DEBUG --- stdout ---
19:53:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1264Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5566Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5713Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5682Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2635m 16% 5126Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 331m 2% 2142Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3141m 19% 4707Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1073Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3515m 22% 14382Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14151Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 14217Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 314m 1% 2072Mi 3%
19:53:32 DEBUG --- stderr ---
19:53:32 DEBUG
19:54:30 INFO
19:54:30 INFO [loop_until]: kubectl --namespace=xlou top pods
19:54:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:54:30 INFO [loop_until]: OK (rc = 0)
19:54:30 DEBUG --- stdout ---
19:54:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 8m 4628Mi
am-55f77847b7-q6zcv 9m 4639Mi
ds-cts-0 5m 414Mi
ds-cts-1 5m 383Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 3467m 13822Mi
ds-idrepo-1 13m 13635Mi
ds-idrepo-2 16m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 2979m 3766Mi
idm-65858d8c4c-8ff69 2534m 3854Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 263m 547Mi
19:54:30 DEBUG --- stderr ---
19:54:30 DEBUG
19:54:32 INFO
19:54:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:54:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:54:32 INFO [loop_until]: OK (rc = 0)
19:54:32 DEBUG --- stdout ---
19:54:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1262Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5567Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5708Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5681Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2693m 16% 5128Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 337m 2% 2144Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3106m 19% 4710Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1073Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3401m 21% 14384Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 14150Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14221Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 304m 1% 2071Mi 3%
19:54:32 DEBUG --- stderr ---
19:54:32 DEBUG
19:55:30 INFO
19:55:30 INFO [loop_until]: kubectl --namespace=xlou top pods
19:55:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:55:30 INFO [loop_until]: OK (rc = 0)
19:55:30 DEBUG --- stdout ---
19:55:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4608Mi
am-55f77847b7-ngpns 6m 4628Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 383Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 3485m 13800Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3318m 3773Mi
idm-65858d8c4c-8ff69 2495m 3856Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 249m 548Mi
19:55:30 DEBUG --- stderr ---
19:55:30 DEBUG
19:55:32 INFO
19:55:32 INFO [loop_until]: kubectl --namespace=xlou top node
19:55:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:55:32 INFO [loop_until]: OK (rc = 0)
19:55:32 DEBUG --- stdout ---
19:55:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1260Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5568Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5709Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5680Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2516m 15% 5131Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 350m 2% 2143Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3165m 19% 4720Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1073Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3703m 23% 14403Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14149Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14223Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 324m 2% 2073Mi 3%
19:55:32 DEBUG --- stderr ---
19:55:32 DEBUG
19:56:30 INFO
19:56:30 INFO [loop_until]: kubectl --namespace=xlou top pods
19:56:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:56:30 INFO [loop_until]: OK (rc = 0)
19:56:30 DEBUG --- stdout ---
19:56:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 8m 4608Mi
am-55f77847b7-ngpns 6m 4628Mi
am-55f77847b7-q6zcv 7m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 6m 384Mi
ds-cts-2 5m 373Mi
ds-idrepo-0 3474m 13812Mi
ds-idrepo-1 14m 13636Mi
ds-idrepo-2 34m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3209m 3775Mi
idm-65858d8c4c-8ff69 2483m 3862Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 245m 548Mi
19:56:30 DEBUG --- stderr ---
19:56:30 DEBUG
19:56:33 INFO
19:56:33 INFO [loop_until]: kubectl --namespace=xlou top node
19:56:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:56:33 INFO [loop_until]: OK (rc = 0)
19:56:33 DEBUG --- stdout ---
19:56:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1260Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5570Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5715Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5681Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2763m 17% 5139Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 343m 2% 2137Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3224m 20% 4719Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1067Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1116Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3450m 21% 14380Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 86m 0% 14150Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14223Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 329m 2% 2070Mi 3%
19:56:33 DEBUG --- stderr ---
19:56:33 DEBUG
19:57:30 INFO
19:57:30 INFO [loop_until]: kubectl --namespace=xlou top pods
19:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:57:30 INFO [loop_until]: OK (rc = 0)
19:57:30 DEBUG --- stdout ---
19:57:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 8m 4628Mi
am-55f77847b7-q6zcv 5m 4639Mi
ds-cts-0 6m 414Mi
ds-cts-1 5m 383Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 3269m 13822Mi
ds-idrepo-1 11m 13635Mi
ds-idrepo-2 10m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 2994m 3778Mi
idm-65858d8c4c-8ff69 2411m 3865Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 242m 548Mi
19:57:30 DEBUG --- stderr ---
19:57:30 DEBUG
19:57:33 INFO
19:57:33 INFO [loop_until]: kubectl --namespace=xlou top node
19:57:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:57:33 INFO [loop_until]: OK (rc = 0)
19:57:33 DEBUG --- stdout ---
19:57:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5571Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5679Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2566m 16% 5142Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 333m 2% 2144Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3086m 19% 4720Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3400m 21% 14387Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14151Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14224Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 310m 1% 2074Mi 3%
19:57:33 DEBUG --- stderr ---
19:57:33 DEBUG
19:58:30 INFO
19:58:30 INFO [loop_until]: kubectl --namespace=xlou top pods
19:58:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:58:30 INFO [loop_until]: OK (rc = 0)
19:58:30 DEBUG --- stdout ---
19:58:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4608Mi
am-55f77847b7-ngpns 6m 4628Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 384Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 3364m 13804Mi
ds-idrepo-1 11m 13635Mi
ds-idrepo-2 12m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3095m 3781Mi
idm-65858d8c4c-8ff69 2345m 3868Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 242m 549Mi
19:58:30 DEBUG --- stderr ---
19:58:30 DEBUG
19:58:33 INFO
19:58:33 INFO [loop_until]: kubectl --namespace=xlou top node
19:58:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:58:33 INFO [loop_until]: OK (rc = 0)
19:58:33 DEBUG --- stdout ---
19:58:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1258Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5570Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5715Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5685Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2423m 15% 5141Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 333m 2% 2141Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3389m 21% 4722Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3424m 21% 14389Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14151Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14221Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 317m 1% 2073Mi 3%
19:58:33 DEBUG --- stderr ---
19:58:33 DEBUG
19:59:30 INFO
19:59:30 INFO [loop_until]: kubectl --namespace=xlou top pods
19:59:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:59:30 INFO [loop_until]: OK (rc = 0)
19:59:30 DEBUG --- stdout ---
19:59:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4608Mi
am-55f77847b7-ngpns 13m 4630Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 6m 383Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 140m 13800Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 9m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 731m 3782Mi
idm-65858d8c4c-8ff69 6m 3868Mi
lodemon-56989b88bb-nm2fw 2m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 84m 547Mi
19:59:30 DEBUG --- stderr ---
19:59:30 DEBUG
19:59:33 INFO
19:59:33 INFO [loop_until]: kubectl --namespace=xlou top node
19:59:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:59:33 INFO [loop_until]: OK (rc = 0)
19:59:33 DEBUG --- stdout ---
19:59:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1261Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5569Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5713Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 5681Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5143Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2143Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 4724Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1071Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14385Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14153Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14221Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 146m 0% 1639Mi 2%
19:59:33 DEBUG --- stderr ---
19:59:33 DEBUG
20:00:30 INFO
20:00:30 INFO [loop_until]: kubectl --namespace=xlou top pods
20:00:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:00:30 INFO [loop_until]: OK (rc = 0)
20:00:30 DEBUG --- stdout ---
20:00:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 7m 4630Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 414Mi
ds-cts-1 5m 383Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 10m 13800Mi
ds-idrepo-1 12m 13636Mi
ds-idrepo-2 9m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 5m 3781Mi
idm-65858d8c4c-8ff69 6m 3868Mi
lodemon-56989b88bb-nm2fw 1m 67Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 1m 105Mi
20:00:30 DEBUG --- stderr ---
20:00:30 DEBUG
20:00:33 INFO
20:00:33 INFO [loop_until]: kubectl --namespace=xlou top node
20:00:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:00:33 INFO [loop_until]: OK (rc = 0)
20:00:33 DEBUG --- stdout ---
20:00:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1261Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5571Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5681Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5142Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4725Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1071Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1116Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14383Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14155Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14220Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1640Mi 2%
20:00:33 DEBUG --- stderr ---
20:00:33 DEBUG
127.0.0.1 - - [11/Aug/2023 20:01:04] "GET /monitoring/average?start_time=23-08-11_18:30:39&stop_time=23-08-11_18:59:03 HTTP/1.1" 200 -
20:01:30 INFO
20:01:30 INFO [loop_until]: kubectl --namespace=xlou top pods
20:01:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:01:30 INFO [loop_until]: OK (rc = 0)
20:01:30 DEBUG --- stdout ---
20:01:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4608Mi
am-55f77847b7-ngpns 6m 4630Mi
am-55f77847b7-q6zcv 23m 4640Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 384Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 11m 13801Mi
ds-idrepo-1 11m 13635Mi
ds-idrepo-2 10m 13570Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 5m 3781Mi
idm-65858d8c4c-8ff69 5m 3868Mi
lodemon-56989b88bb-nm2fw 4m 68Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 121m 233Mi
20:01:30 DEBUG --- stderr ---
20:01:30 DEBUG
20:01:33 INFO
20:01:33 INFO [loop_until]: kubectl --namespace=xlou top node
20:01:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:01:33 INFO [loop_until]: OK (rc = 0)
20:01:33 DEBUG --- stdout ---
20:01:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1262Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5571Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5682Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5143Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 136m 0% 2143Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4728Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1072Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 14383Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14154Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14223Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 710m 4% 2001Mi 3%
20:01:33 DEBUG --- stderr ---
20:01:33 DEBUG
20:02:30 INFO
20:02:30 INFO [loop_until]: kubectl --namespace=xlou top pods
20:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:02:30 INFO [loop_until]: OK (rc = 0)
20:02:30 DEBUG --- stdout ---
20:02:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4608Mi
am-55f77847b7-ngpns 6m 4630Mi
am-55f77847b7-q6zcv 6m 4640Mi
ds-cts-0 5m 413Mi
ds-cts-1 5m 383Mi
ds-cts-2 5m 375Mi
ds-idrepo-0 3835m 13803Mi
ds-idrepo-1 11m 13636Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 3642m 3788Mi
idm-65858d8c4c-8ff69 2680m 3893Mi
lodemon-56989b88bb-nm2fw 2m 68Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 431m 522Mi
20:02:30 DEBUG --- stderr ---
20:02:30 DEBUG
20:02:33 INFO
20:02:33 INFO [loop_until]: kubectl --namespace=xlou top node
20:02:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:02:33 INFO [loop_until]: OK (rc = 0)
20:02:33 DEBUG --- stdout ---
20:02:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5570Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5711Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5682Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2907m 18% 5167Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 363m 2% 2144Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3808m 23% 4737Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1075Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 4039m 25% 14380Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14152Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14221Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 494m 3% 2048Mi 3%
20:02:33 DEBUG --- stderr ---
20:02:33 DEBUG
20:03:30 INFO
20:03:30 INFO [loop_until]: kubectl --namespace=xlou top pods
20:03:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:03:30 INFO [loop_until]: OK (rc = 0)
20:03:30 DEBUG --- stdout ---
20:03:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 7m 4630Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 383Mi
ds-cts-2 5m 375Mi
ds-idrepo-0 4065m 13813Mi
ds-idrepo-1 12m 13636Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 4Mi
idm-65858d8c4c-5kwbg 3727m 3792Mi
idm-65858d8c4c-8ff69 2929m 3875Mi
lodemon-56989b88bb-nm2fw 2m 68Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 350m 531Mi
20:03:30 DEBUG --- stderr ---
20:03:30 DEBUG
20:03:33 INFO
20:03:33 INFO [loop_until]: kubectl --namespace=xlou top node
20:03:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:03:33 INFO [loop_until]: OK (rc = 0)
20:03:33 DEBUG --- stdout ---
20:03:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5570Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5715Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5684Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 3201m 20% 5146Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 363m 2% 2146Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3877m 24% 4739Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1066Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1073Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1121Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 4171m 26% 14410Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14154Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14221Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 415m 2% 2055Mi 3%
20:03:33 DEBUG --- stderr ---
20:03:33 DEBUG
20:04:30 INFO
20:04:30 INFO [loop_until]: kubectl --namespace=xlou top pods
20:04:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:04:31 INFO [loop_until]: OK (rc = 0)
20:04:31 DEBUG --- stdout ---
20:04:31 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4608Mi
am-55f77847b7-ngpns 8m 4630Mi
am-55f77847b7-q6zcv 7m 4640Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 383Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 3710m 13810Mi
ds-idrepo-1 11m 13636Mi
ds-idrepo-2 11m 13568Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3655m 3797Mi
idm-65858d8c4c-8ff69 2698m 3878Mi
lodemon-56989b88bb-nm2fw 2m 68Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 281m 530Mi
20:04:31 DEBUG --- stderr ---
20:04:31 DEBUG
20:04:34 INFO
20:04:34 INFO [loop_until]: kubectl --namespace=xlou top node
20:04:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:04:34 INFO [loop_until]: OK (rc = 0)
20:04:34 DEBUG --- stdout ---
20:04:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1266Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5568Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5710Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5681Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2661m 16% 5152Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 366m 2% 2139Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3809m 23% 4744Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1072Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 51m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3991m 25% 14389Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14154Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14224Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 341m 2% 2053Mi 3%
20:04:34 DEBUG --- stderr ---
20:04:34 DEBUG
20:05:31 INFO
20:05:31 INFO [loop_until]: kubectl --namespace=xlou top pods
20:05:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:05:31 INFO [loop_until]: OK (rc = 0)
20:05:31 DEBUG --- stdout ---
20:05:31 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 5m 4608Mi
am-55f77847b7-ngpns 6m 4630Mi
am-55f77847b7-q6zcv 6m 4639Mi
ds-cts-0 5m 413Mi
ds-cts-1 5m 384Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 3948m 13805Mi
ds-idrepo-1 12m 13636Mi
ds-idrepo-2 16m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3552m 3799Mi
idm-65858d8c4c-8ff69 3078m 3881Mi
lodemon-56989b88bb-nm2fw 2m 68Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 282m 532Mi
20:05:31 DEBUG --- stderr ---
20:05:31 DEBUG
20:05:34 INFO
20:05:34 INFO [loop_until]: kubectl --namespace=xlou top node
20:05:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:05:34 INFO [loop_until]: OK (rc = 0)
20:05:34 DEBUG --- stdout ---
20:05:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1263Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5568Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5714Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5684Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 3240m 20% 5157Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 366m 2% 2141Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3778m 23% 4747Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1073Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1121Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 4027m 25% 14385Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14157Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14218Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 353m 2% 2056Mi 3%
20:05:34 DEBUG --- stderr ---
20:05:34 DEBUG
20:06:31 INFO
20:06:31 INFO [loop_until]: kubectl --namespace=xlou top pods
20:06:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:06:31 INFO [loop_until]: OK (rc = 0)
20:06:31 DEBUG --- stdout ---
20:06:31 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 6m 4608Mi
am-55f77847b7-ngpns 7m 4630Mi
am-55f77847b7-q6zcv 28m 4640Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 384Mi
ds-cts-2 5m 374Mi
ds-idrepo-0 4336m 13812Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 22m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3856m 3804Mi
idm-65858d8c4c-8ff69 3012m 3885Mi
lodemon-56989b88bb-nm2fw 2m 68Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 295m 532Mi
20:06:31 DEBUG --- stderr ---
20:06:31 DEBUG
20:06:34 INFO
20:06:34 INFO [loop_until]: kubectl --namespace=xlou top node
20:06:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:06:34 INFO [loop_until]: OK (rc = 0)
20:06:34 DEBUG --- stdout ---
20:06:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1265Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5569Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 5725Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5683Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 3225m 20% 5160Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 360m 2% 2142Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 4221m 26% 4750Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1118Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 4410m 27% 14392Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 72m 0% 14158Mi 24%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14220Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 355m 2% 2057Mi 3%
20:06:34 DEBUG --- stderr ---
20:06:34 DEBUG
20:07:31 INFO
20:07:31 INFO [loop_until]: kubectl --namespace=xlou top pods
20:07:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:07:31 INFO [loop_until]: OK (rc = 0)
20:07:31 DEBUG --- stdout ---
20:07:31 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb 1m 4Mi
am-55f77847b7-7qk7g 7m 4608Mi
am-55f77847b7-ngpns 6m 4630Mi
am-55f77847b7-q6zcv 11m 4640Mi
ds-cts-0 6m 413Mi
ds-cts-1 5m 384Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 4048m 13822Mi
ds-idrepo-1 12m 13635Mi
ds-idrepo-2 12m 13569Mi
end-user-ui-6845bc78c7-m5k2c 1m 3Mi
idm-65858d8c4c-5kwbg 3985m 3809Mi
idm-65858d8c4c-8ff69 2697m 3887Mi
lodemon-56989b88bb-nm2fw 2m 68Mi
login-ui-74d6fb46c-2qx2r 1m 3Mi
overseer-0-5fcfb8f45c-v6ck5 284m 532Mi
20:07:31 DEBUG --- stderr ---
20:07:31 DEBUG
20:07:34 INFO
20:07:34 INFO [loop_until]: kubectl --namespace=xlou top node
20:07:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:07:34 INFO [loop_until]: OK (rc = 0)
20:07:34 DEBUG --- stdout ---
20:07:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1268Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5568Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5712Mi 9%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5683Mi 9%
gke-xlou-cdm-default-pool-f05840a3-bf2g 2603m 16% 5160Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 368m 2% 2143Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 3990m 25% 4753Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3991m 25% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14156Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14223Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 352m 2% 2057Mi 3% 20:07:34 DEBUG --- stderr --- 20:07:34 DEBUG 20:08:31 INFO 20:08:31 INFO [loop_until]: kubectl --namespace=xlou top pods 20:08:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:08:31 INFO [loop_until]: OK (rc = 0) 20:08:31 DEBUG --- stdout --- 20:08:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4608Mi am-55f77847b7-ngpns 7m 4630Mi am-55f77847b7-q6zcv 7m 4640Mi ds-cts-0 6m 414Mi ds-cts-1 5m 385Mi ds-cts-2 6m 374Mi ds-idrepo-0 3787m 13823Mi ds-idrepo-1 12m 13636Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3465m 3813Mi idm-65858d8c4c-8ff69 2830m 3890Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 285m 532Mi 20:08:31 DEBUG --- stderr --- 20:08:31 DEBUG 20:08:34 INFO 20:08:34 INFO [loop_until]: kubectl --namespace=xlou top node 20:08:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:08:34 INFO [loop_until]: OK (rc = 0) 20:08:34 DEBUG --- stdout --- 20:08:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5713Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5680Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2885m 18% 5163Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 351m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3627m 22% 4758Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1070Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3897m 24% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14154Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14218Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 349m 2% 2057Mi 3% 20:08:34 DEBUG --- stderr --- 20:08:34 DEBUG 20:09:31 INFO 20:09:31 INFO [loop_until]: kubectl --namespace=xlou top pods 20:09:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:09:31 INFO [loop_until]: OK (rc = 0) 20:09:31 DEBUG --- stdout --- 20:09:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4608Mi am-55f77847b7-ngpns 7m 4630Mi am-55f77847b7-q6zcv 9m 4640Mi ds-cts-0 6m 414Mi ds-cts-1 5m 384Mi ds-cts-2 6m 374Mi ds-idrepo-0 3846m 13804Mi ds-idrepo-1 12m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3928m 3816Mi idm-65858d8c4c-8ff69 2610m 3893Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 279m 533Mi 20:09:31 DEBUG --- stderr --- 20:09:31 DEBUG 20:09:34 INFO 20:09:34 INFO [loop_until]: kubectl --namespace=xlou top node 20:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:09:34 INFO [loop_until]: OK (rc = 0) 20:09:34 DEBUG --- stdout --- 20:09:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1261Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5570Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5715Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2707m 17% 5162Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 360m 2% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3814m 24% 4760Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3882m 24% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14154Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14225Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 345m 2% 2056Mi 3% 20:09:34 DEBUG --- stderr --- 20:09:34 DEBUG 20:10:31 INFO 20:10:31 INFO [loop_until]: kubectl --namespace=xlou top pods 20:10:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:10:31 INFO [loop_until]: OK (rc = 0) 20:10:31 DEBUG --- stdout --- 20:10:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 8m 4608Mi am-55f77847b7-ngpns 7m 4630Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 5m 414Mi ds-cts-1 5m 383Mi ds-cts-2 6m 375Mi ds-idrepo-0 3874m 13822Mi ds-idrepo-1 11m 13635Mi ds-idrepo-2 11m 13570Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3511m 3819Mi idm-65858d8c4c-8ff69 2992m 3897Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 274m 534Mi 20:10:31 DEBUG --- stderr --- 20:10:31 DEBUG 20:10:34 INFO 20:10:34 INFO [loop_until]: kubectl --namespace=xlou top node 20:10:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:10:34 INFO [loop_until]: OK (rc = 0) 20:10:34 DEBUG --- stdout --- 20:10:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1266Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5716Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5682Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3162m 19% 5168Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 362m 2% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3458m 21% 4763Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4014m 25% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14155Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14223Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 348m 2% 2059Mi 3% 20:10:34 DEBUG --- stderr --- 20:10:34 DEBUG 20:11:31 INFO 
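The `[loop_until]` entries above all follow the same pattern: run a shell command, accept it if the return code is in `expected_rc`, otherwise retry every `interval` seconds until `max_time` elapses. This is not the actual lodemon implementation, only a minimal sketch of that polling wrapper under those assumptions (the function name and parameters mirror the log fields):

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run `cmd` until its return code is in `expected_rc`,
    or raise TimeoutError once `max_time` seconds have elapsed."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            print(f"[loop_until]: OK (rc = {result.returncode})")
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"[loop_until]: gave up after {max_time}s "
                               f"(rc = {result.returncode})")
        time.sleep(interval)
```

With this sketch, the snapshots above correspond to calls such as `loop_until("kubectl --namespace=xlou top pods", max_time=180, interval=5)` issued once a minute.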
20:11:31 INFO [loop_until]: kubectl --namespace=xlou top pods
20:11:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:11:31 INFO [loop_until]: OK (rc = 0)
20:11:31 DEBUG --- stdout ---
20:11:31 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m      4Mi
am-55f77847b7-7qk7g            6m      4608Mi
am-55f77847b7-ngpns            10m     4630Mi
am-55f77847b7-q6zcv            8m      4640Mi
ds-cts-0                       6m      414Mi
ds-cts-1                       5m      384Mi
ds-cts-2                       6m      375Mi
ds-idrepo-0                    4287m   13806Mi
ds-idrepo-1                    12m     13635Mi
ds-idrepo-2                    11m     13570Mi
end-user-ui-6845bc78c7-m5k2c   1m      4Mi
idm-65858d8c4c-5kwbg           3720m   3823Mi
idm-65858d8c4c-8ff69           3369m   3901Mi
lodemon-56989b88bb-nm2fw       2m      68Mi
login-ui-74d6fb46c-2qx2r       1m      3Mi
overseer-0-5fcfb8f45c-v6ck5    283m    534Mi
20:11:31 DEBUG --- stderr ---
20:11:31 DEBUG
20:11:34 INFO
20:11:34 INFO [loop_until]: kubectl --namespace=xlou top node
20:11:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:11:34 INFO [loop_until]: OK (rc = 0)
20:11:34 DEBUG --- stdout ---
20:11:34 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   74m     0%    1265Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   66m     0%    5571Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m     0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m     0%    5682Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3438m   21%   5171Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   376m    2%    2138Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3604m   22%   4789Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             55m     0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m     0%    1074Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             55m     0%    1116Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4264m   26%   14400Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             60m     0%    14157Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             62m     0%    14225Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       364m    2%    2061Mi    3%
20:11:34 DEBUG --- stderr ---
20:11:34 DEBUG
20:12:31 INFO
20:12:31 INFO [loop_until]: kubectl --namespace=xlou top pods
20:12:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:12:31 INFO [loop_until]: OK (rc = 0)
20:12:31 DEBUG --- stdout ---
20:12:31 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m      4Mi
am-55f77847b7-7qk7g            8m      4608Mi
am-55f77847b7-ngpns            6m      4627Mi
am-55f77847b7-q6zcv            8m      4640Mi
ds-cts-0                       5m      414Mi
ds-cts-1                       5m      384Mi
ds-cts-2                       6m      374Mi
ds-idrepo-0                    3809m   13800Mi
ds-idrepo-1                    12m     13635Mi
ds-idrepo-2                    11m     13569Mi
end-user-ui-6845bc78c7-m5k2c   1m      3Mi
idm-65858d8c4c-5kwbg           3594m   3832Mi
idm-65858d8c4c-8ff69           2640m   3903Mi
lodemon-56989b88bb-nm2fw       2m      68Mi
login-ui-74d6fb46c-2qx2r       1m      3Mi
overseer-0-5fcfb8f45c-v6ck5    271m    535Mi
20:12:31 DEBUG --- stderr ---
20:12:31 DEBUG
20:12:34 INFO
20:12:34 INFO [loop_until]: kubectl --namespace=xlou top node
20:12:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:12:35 INFO [loop_until]: OK (rc = 0)
20:12:35 DEBUG --- stdout ---
20:12:35 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   73m     0%    1262Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m     0%    5568Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m     0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   62m     0%    5680Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2756m   17%   5173Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   364m    2%    2140Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3555m   22%   4774Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             55m     0%    1080Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m     0%    1072Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m     0%    1117Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             3784m   23%   14397Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m     0%    14156Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             62m     0%    14221Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       343m    2%    2060Mi    3%
20:12:35 DEBUG --- stderr ---
20:12:35 DEBUG
20:13:31 INFO
20:13:31 INFO [loop_until]: kubectl --namespace=xlou top pods
20:13:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:13:31 INFO [loop_until]: OK (rc = 0)
20:13:31 DEBUG --- stdout ---
20:13:31 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m      4Mi
am-55f77847b7-7qk7g            6m      4608Mi
am-55f77847b7-ngpns            8m      4627Mi
am-55f77847b7-q6zcv            8m      4640Mi
ds-cts-0                       5m      414Mi
ds-cts-1                       5m      384Mi
ds-cts-2                       6m      376Mi
ds-idrepo-0                    4266m   13801Mi
ds-idrepo-1                    12m     13635Mi
ds-idrepo-2                    10m     13568Mi
end-user-ui-6845bc78c7-m5k2c   1m      3Mi
idm-65858d8c4c-5kwbg           4405m   3836Mi
idm-65858d8c4c-8ff69           2895m   3906Mi
lodemon-56989b88bb-nm2fw       2m      68Mi
login-ui-74d6fb46c-2qx2r       1m      3Mi
overseer-0-5fcfb8f45c-v6ck5    311m    535Mi
20:13:31 DEBUG --- stderr ---
20:13:31 DEBUG
20:13:35 INFO
20:13:35 INFO [loop_until]: kubectl --namespace=xlou top node
20:13:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:13:35 INFO [loop_until]: OK (rc = 0)
20:13:35 DEBUG --- stdout ---
20:13:35 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   79m     0%    1263Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   63m     0%    5569Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m     0%    5713Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   62m     0%    5681Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2923m   18%   5175Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   378m    2%    2142Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   4662m   29%   4778Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m     0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m     0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m     0%    1117Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4481m   28%   14389Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             60m     0%    14156Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             71m     0%    14221Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       379m    2%    2061Mi    3%
20:13:35 DEBUG --- stderr ---
20:13:35 DEBUG
20:14:31 INFO
20:14:31 INFO [loop_until]: kubectl --namespace=xlou top pods
20:14:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:14:32 INFO [loop_until]: OK (rc = 0)
20:14:32 DEBUG --- stdout ---
20:14:32 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m      4Mi
am-55f77847b7-7qk7g            11m     4608Mi
am-55f77847b7-ngpns            6m      4627Mi
am-55f77847b7-q6zcv            7m      4640Mi
ds-cts-0                       6m      414Mi
ds-cts-1                       5m      385Mi
ds-cts-2                       6m      375Mi
ds-idrepo-0                    3819m   13803Mi
ds-idrepo-1                    12m     13635Mi
ds-idrepo-2                    11m     13569Mi
end-user-ui-6845bc78c7-m5k2c   1m      3Mi
idm-65858d8c4c-5kwbg           3461m   3840Mi
idm-65858d8c4c-8ff69           2860m   3916Mi
lodemon-56989b88bb-nm2fw       2m      68Mi
login-ui-74d6fb46c-2qx2r       1m      3Mi
overseer-0-5fcfb8f45c-v6ck5    312m    576Mi
20:14:32 DEBUG --- stderr ---
20:14:32 DEBUG
20:14:35 INFO
20:14:35 INFO [loop_until]: kubectl --namespace=xlou top node
20:14:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:14:35 INFO [loop_until]: OK (rc = 0)
20:14:35 DEBUG --- stdout ---
20:14:35 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m     0%    1267Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   71m     0%    5570Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   64m     0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m     0%    5680Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3028m   19%   5187Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   358m    2%    2145Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3538m   22%   4785Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             55m     0%    1067Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m     0%    1073Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m     0%    1117Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             3861m   24%   14395Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m     0%    14157Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             66m     0%    14222Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       379m    2%    2101Mi    3%
20:14:35 DEBUG --- stderr ---
20:14:35 DEBUG
20:15:32 INFO
20:15:32 INFO [loop_until]: kubectl --namespace=xlou top pods
20:15:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:15:32 INFO [loop_until]: OK (rc = 0)
20:15:32 DEBUG --- stdout ---
20:15:32 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m      4Mi
am-55f77847b7-7qk7g            6m      4603Mi
am-55f77847b7-ngpns            19m     4639Mi
am-55f77847b7-q6zcv            6m      4640Mi
ds-cts-0                       6m      414Mi
ds-cts-1                       4m      384Mi
ds-cts-2                       6m      374Mi
ds-idrepo-0                    4178m   13800Mi
ds-idrepo-1                    12m     13635Mi
ds-idrepo-2                    11m     13568Mi
end-user-ui-6845bc78c7-m5k2c   1m      3Mi
idm-65858d8c4c-5kwbg           3867m   3846Mi
idm-65858d8c4c-8ff69           2972m   3919Mi
lodemon-56989b88bb-nm2fw       2m      68Mi
login-ui-74d6fb46c-2qx2r       1m      3Mi
overseer-0-5fcfb8f45c-v6ck5    277m    577Mi
20:15:32 DEBUG --- stderr ---
20:15:32 DEBUG
20:15:35 INFO
20:15:35 INFO [loop_until]: kubectl --namespace=xlou top node
20:15:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:15:35 INFO [loop_until]: OK (rc = 0)
20:15:35 DEBUG --- stdout ---
20:15:35 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m     0%    1267Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m     0%    5563Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m     0%    5716Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   76m     0%    5694Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3073m   19%   5188Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   382m    2%    2130Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3749m   23%   4788Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m     0%    1068Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m     0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             53m     0%    1119Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4180m   26%   14391Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             60m     0%    14155Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             65m     0%    14223Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       352m    2%    2103Mi    3%
20:15:35 DEBUG --- stderr ---
20:15:35 DEBUG
20:16:32 INFO
20:16:32 INFO [loop_until]: kubectl --namespace=xlou top pods
20:16:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:16:32 INFO [loop_until]: OK (rc = 0)
20:16:32 DEBUG --- stdout ---
20:16:32 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m      4Mi
am-55f77847b7-7qk7g            24m     4608Mi
am-55f77847b7-ngpns            7m      4628Mi
am-55f77847b7-q6zcv            5m      4640Mi
ds-cts-0                       6m      414Mi
ds-cts-1                       5m      384Mi
ds-cts-2                       6m      374Mi
ds-idrepo-0                    3812m   13823Mi
ds-idrepo-1                    10m     13635Mi
ds-idrepo-2                    11m     13569Mi
end-user-ui-6845bc78c7-m5k2c   1m      4Mi
idm-65858d8c4c-5kwbg           3156m   3873Mi
idm-65858d8c4c-8ff69           3007m   3928Mi
lodemon-56989b88bb-nm2fw       2m      68Mi
login-ui-74d6fb46c-2qx2r       1m      3Mi
overseer-0-5fcfb8f45c-v6ck5    268m    576Mi
20:16:32 DEBUG --- stderr ---
20:16:32 DEBUG
20:16:35 INFO
20:16:35 INFO [loop_until]: kubectl --namespace=xlou top node
20:16:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:16:35 INFO [loop_until]: OK (rc = 0)
20:16:35 DEBUG --- stdout ---
20:16:35 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m     0%    1266Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   85m     0%    5570Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m     0%    5711Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m     0%    5683Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3223m   20%   5193Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   365m    2%    2139Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3676m   23%   4815Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m     0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m     0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             53m     0%    1119Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             3837m   24%   14399Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             60m     0%    14157Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             61m     0%    14222Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       344m    2%    2103Mi    3%
20:16:35 DEBUG --- stderr ---
20:16:35 DEBUG
20:17:32 INFO
20:17:32 INFO [loop_until]: kubectl --namespace=xlou top pods
20:17:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:17:32 INFO [loop_until]: OK (rc = 0)
20:17:32 DEBUG --- stdout ---
20:17:32 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m      4Mi
am-55f77847b7-7qk7g            22m     4611Mi
am-55f77847b7-ngpns            20m     4632Mi
am-55f77847b7-q6zcv            7m      4640Mi
ds-cts-0                       8m      414Mi
ds-cts-1                       6m      383Mi
ds-cts-2                       6m      374Mi
ds-idrepo-0                    3704m   13800Mi
ds-idrepo-1                    11m     13636Mi
ds-idrepo-2                    11m     13569Mi
end-user-ui-6845bc78c7-m5k2c   1m      4Mi
idm-65858d8c4c-5kwbg           3694m   3849Mi
idm-65858d8c4c-8ff69           2501m   3920Mi
lodemon-56989b88bb-nm2fw       2m      68Mi
login-ui-74d6fb46c-2qx2r       1m      3Mi
overseer-0-5fcfb8f45c-v6ck5    266m    576Mi
20:17:32 DEBUG --- stderr ---
20:17:32 DEBUG
20:17:35 INFO
20:17:35 INFO [loop_until]: kubectl --namespace=xlou top node
20:17:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
20:17:35 INFO [loop_until]: OK (rc = 0)
20:17:35 DEBUG --- stdout --- 20:17:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1267Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 5572Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5713Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 5686Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2683m 16% 5194Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 372m 2% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3787m 23% 4791Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3906m 24% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14156Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14225Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 331m 2% 2105Mi 3% 20:17:35 DEBUG --- stderr --- 20:17:35 DEBUG 20:18:32 INFO 20:18:32 INFO [loop_until]: kubectl --namespace=xlou top pods 20:18:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:18:32 INFO [loop_until]: OK (rc = 0) 20:18:32 DEBUG --- stdout --- 20:18:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 12m 4611Mi am-55f77847b7-ngpns 11m 4632Mi am-55f77847b7-q6zcv 7m 4640Mi ds-cts-0 6m 415Mi ds-cts-1 5m 384Mi ds-cts-2 5m 374Mi ds-idrepo-0 3909m 13822Mi ds-idrepo-1 9m 13635Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3647m 3857Mi idm-65858d8c4c-8ff69 2958m 3924Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 267m 577Mi 20:18:32 DEBUG --- stderr --- 20:18:32 DEBUG 20:18:35 INFO 20:18:35 INFO [loop_until]: kubectl --namespace=xlou top node 20:18:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:18:35 INFO [loop_until]: OK (rc = 0) 20:18:35 DEBUG --- stdout --- 20:18:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1265Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5574Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5714Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5685Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2959m 18% 5193Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 365m 2% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3860m 24% 4801Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4086m 25% 14394Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14160Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14224Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 346m 2% 2103Mi 3% 20:18:35 DEBUG --- stderr --- 20:18:35 DEBUG 20:19:32 INFO 20:19:32 INFO [loop_until]: kubectl --namespace=xlou top pods 20:19:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:19:32 INFO [loop_until]: OK (rc = 0) 20:19:32 DEBUG --- stdout --- 20:19:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 12m 4611Mi am-55f77847b7-ngpns 9m 4632Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 5m 414Mi ds-cts-1 5m 384Mi ds-cts-2 6m 374Mi ds-idrepo-0 4007m 13806Mi ds-idrepo-1 10m 13635Mi ds-idrepo-2 12m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3645m 3861Mi idm-65858d8c4c-8ff69 2716m 3926Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 269m 577Mi 20:19:32 DEBUG --- stderr --- 20:19:32 DEBUG 20:19:35 INFO 20:19:35 INFO [loop_until]: kubectl --namespace=xlou top node 20:19:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:19:35 INFO [loop_until]: OK (rc = 0) 20:19:35 DEBUG --- stdout --- 20:19:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1266Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5570Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 
5685Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2894m 18% 5207Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 360m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3793m 23% 4805Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3971m 24% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14157Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14226Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 343m 2% 2104Mi 3% 20:19:35 DEBUG --- stderr --- 20:19:35 DEBUG 20:20:32 INFO 20:20:32 INFO [loop_until]: kubectl --namespace=xlou top pods 20:20:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:20:32 INFO [loop_until]: OK (rc = 0) 20:20:32 DEBUG --- stdout --- 20:20:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 9m 4612Mi am-55f77847b7-ngpns 7m 4632Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 6m 414Mi ds-cts-1 5m 384Mi ds-cts-2 6m 374Mi ds-idrepo-0 3986m 13800Mi ds-idrepo-1 8m 13636Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 4122m 3865Mi idm-65858d8c4c-8ff69 2607m 3930Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 265m 577Mi 20:20:32 DEBUG --- stderr --- 20:20:32 DEBUG 20:20:35 INFO 20:20:35 INFO [loop_until]: kubectl --namespace=xlou top node 20:20:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:20:35 INFO [loop_until]: OK (rc = 0) 20:20:35 DEBUG --- stdout --- 20:20:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5713Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 5686Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2698m 16% 5201Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 363m 2% 2143Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 3950m 24% 4809Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3962m 24% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14157Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14226Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 352m 2% 2100Mi 3% 20:20:35 DEBUG --- stderr --- 20:20:35 DEBUG 20:21:32 INFO 20:21:32 INFO [loop_until]: kubectl --namespace=xlou top pods 20:21:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:21:32 INFO [loop_until]: OK (rc = 0) 20:21:32 DEBUG --- stdout --- 20:21:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4612Mi am-55f77847b7-ngpns 6m 4632Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 6m 414Mi ds-cts-1 5m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 3505m 13802Mi ds-idrepo-1 8m 13635Mi ds-idrepo-2 10m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3479m 3865Mi idm-65858d8c4c-8ff69 2879m 3934Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 256m 574Mi 20:21:32 DEBUG --- stderr --- 20:21:32 DEBUG 20:21:36 INFO 20:21:36 INFO [loop_until]: kubectl --namespace=xlou top node 20:21:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:21:36 INFO [loop_until]: OK (rc = 0) 20:21:36 DEBUG --- stdout --- 20:21:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5571Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5713Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5683Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3085m 19% 5202Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 346m 2% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3537m 22% 4809Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1076Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3743m 23% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14158Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14227Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 338m 2% 2102Mi 3% 20:21:36 DEBUG --- stderr --- 20:21:36 DEBUG 20:22:32 INFO 20:22:32 INFO [loop_until]: kubectl --namespace=xlou top pods 20:22:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:22:32 INFO [loop_until]: OK (rc = 0) 20:22:32 DEBUG --- stdout --- 20:22:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 7m 4633Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 6m 414Mi ds-cts-1 5m 384Mi ds-cts-2 6m 375Mi ds-idrepo-0 3958m 13821Mi ds-idrepo-1 9m 13636Mi ds-idrepo-2 10m 13570Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3705m 3867Mi idm-65858d8c4c-8ff69 3091m 3940Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 282m 574Mi 20:22:32 DEBUG --- stderr --- 20:22:32 DEBUG 20:22:36 INFO 20:22:36 INFO [loop_until]: kubectl --namespace=xlou top node 20:22:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:22:36 INFO [loop_until]: OK (rc = 0) 20:22:36 DEBUG --- stdout --- 20:22:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5718Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5684Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3056m 19% 5208Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 369m 2% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3793m 23% 4810Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4135m 26% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14160Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14228Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 356m 2% 2100Mi 3% 20:22:36 DEBUG --- stderr --- 20:22:36 DEBUG 20:23:32 INFO 20:23:32 INFO [loop_until]: kubectl --namespace=xlou top pods 20:23:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:23:32 INFO [loop_until]: OK (rc = 0) 20:23:32 DEBUG --- stdout --- 20:23:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4611Mi am-55f77847b7-ngpns 7m 4633Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 5m 414Mi ds-cts-1 6m 383Mi ds-cts-2 6m 375Mi ds-idrepo-0 4068m 13805Mi ds-idrepo-1 8m 13635Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3786m 3873Mi idm-65858d8c4c-8ff69 2870m 3944Mi lodemon-56989b88bb-nm2fw 1m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 272m 575Mi 20:23:32 DEBUG --- stderr --- 20:23:32 DEBUG 20:23:36 INFO 20:23:36 INFO [loop_until]: kubectl --namespace=xlou top node 20:23:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:23:36 INFO [loop_until]: OK (rc = 0) 20:23:36 DEBUG --- stdout --- 20:23:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5717Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5698Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2844m 17% 5214Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 370m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3953m 24% 4815Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3937m 24% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14154Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14227Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 342m 2% 2102Mi 3% 20:23:36 DEBUG --- stderr --- 20:23:36 DEBUG 20:24:32 INFO 
20:24:32 INFO [loop_until]: kubectl --namespace=xlou top pods
20:24:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:24:33 INFO [loop_until]: OK (rc = 0)
20:24:33 DEBUG --- stdout ---
20:24:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4611Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    4005m    13799Mi
ds-idrepo-1                    9m       13636Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           3837m    3876Mi
idm-65858d8c4c-8ff69           2839m    3948Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    266m     575Mi
20:24:33 DEBUG --- stderr ---
20:24:33 DEBUG
20:24:36 INFO
20:24:36 INFO [loop_until]: kubectl --namespace=xlou top node
20:24:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:24:36 INFO [loop_until]: OK (rc = 0)
20:24:36 DEBUG --- stdout ---
20:24:36 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   73m      0%    1266Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m      0%    5570Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m      0%    5715Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   70m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3042m    19%   5219Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   371m     2%    2133Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3928m    24%   4819Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m      0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             52m      0%    1074Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             55m      0%    1120Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4054m    25%   14421Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             60m      0%    14157Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             57m      0%    14230Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       344m     2%    2102Mi    3%
20:24:36 DEBUG --- stderr ---
20:24:36 DEBUG
20:25:33 INFO
20:25:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:25:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:25:33 INFO [loop_until]: OK (rc = 0)
20:25:33 DEBUG --- stdout ---
20:25:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4611Mi
am-55f77847b7-ngpns            7m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       416Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       5m       375Mi
ds-idrepo-0                    4160m    13800Mi
ds-idrepo-1                    8m       13635Mi
ds-idrepo-2                    11m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           3944m    3880Mi
idm-65858d8c4c-8ff69           2806m    3950Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    272m     575Mi
20:25:33 DEBUG --- stderr ---
20:25:33 DEBUG
20:25:36 INFO
20:25:36 INFO [loop_until]: kubectl --namespace=xlou top node
20:25:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:25:36 INFO [loop_until]: OK (rc = 0)
20:25:36 DEBUG --- stdout ---
20:25:36 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m      0%    1264Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m      0%    5571Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   69m      0%    5716Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   70m      0%    5683Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2861m    18%   5219Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   387m     2%    2144Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3846m    24%   4822Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m      0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             52m      0%    1070Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             56m      0%    1120Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4120m    25%   14420Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14155Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             57m      0%    14227Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       348m     2%    2104Mi    3%
20:25:36 DEBUG --- stderr ---
20:25:36 DEBUG
20:26:33 INFO
20:26:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:26:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:26:33 INFO [loop_until]: OK (rc = 0)
20:26:33 DEBUG --- stdout ---
20:26:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4611Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    4211m    13822Mi
ds-idrepo-1                    9m       13635Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           3827m    3884Mi
idm-65858d8c4c-8ff69           3082m    3953Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    263m     576Mi
20:26:33 DEBUG --- stderr ---
20:26:33 DEBUG
20:26:36 INFO
20:26:36 INFO [loop_until]: kubectl --namespace=xlou top node
20:26:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:26:36 INFO [loop_until]: OK (rc = 0)
20:26:36 DEBUG --- stdout ---
20:26:36 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m      0%    1265Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m      0%    5572Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m      0%    5713Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m      0%    5684Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2906m    18%   5221Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   372m     2%    2144Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   4031m    25%   4826Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             55m      0%    1068Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             55m      0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             56m      0%    1120Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4151m    26%   14416Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             60m      0%    14159Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m      0%    14228Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       346m     2%    2103Mi    3%
20:26:36 DEBUG --- stderr ---
20:26:36 DEBUG
20:27:33 INFO
20:27:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:27:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:27:33 INFO [loop_until]: OK (rc = 0)
20:27:33 DEBUG --- stdout ---
20:27:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       5m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       5m       375Mi
ds-idrepo-0                    3969m    13822Mi
ds-idrepo-1                    8m       13635Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           3839m    3888Mi
idm-65858d8c4c-8ff69           2843m    3969Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    281m     579Mi
20:27:33 DEBUG --- stderr ---
20:27:33 DEBUG
20:27:36 INFO
20:27:36 INFO [loop_until]: kubectl --namespace=xlou top node
20:27:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:27:36 INFO [loop_until]: OK (rc = 0)
20:27:36 DEBUG --- stdout ---
20:27:36 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m      0%    1267Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   69m      0%    5583Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m      0%    5717Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   66m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2957m    18%   5229Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   373m     2%    2144Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3910m    24%   4830Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m      0%    1065Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             56m      0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             52m      0%    1121Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4173m    26%   14408Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14161Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m      0%    14235Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       351m     2%    2107Mi    3%
20:27:36 DEBUG --- stderr ---
20:27:36 DEBUG
20:28:33 INFO
20:28:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:28:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:28:33 INFO [loop_until]: OK (rc = 0)
20:28:33 DEBUG --- stdout ---
20:28:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            7m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       5m       375Mi
ds-idrepo-0                    4048m    13803Mi
ds-idrepo-1                    8m       13636Mi
ds-idrepo-2                    11m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           3846m    3893Mi
idm-65858d8c4c-8ff69           2895m    3959Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    271m     578Mi
20:28:33 DEBUG --- stderr ---
20:28:33 DEBUG
20:28:36 INFO
20:28:36 INFO [loop_until]: kubectl --namespace=xlou top node
20:28:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:28:36 INFO [loop_until]: OK (rc = 0)
20:28:36 DEBUG --- stdout ---
20:28:36 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m      0%    1266Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m      0%    5573Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m      0%    5712Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m      0%    5687Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3011m    18%   5228Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   368m     2%    2143Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3706m    23%   4833Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m      0%    1068Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             55m      0%    1067Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1120Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             3904m    24%   14398Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             67m      0%    14167Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m      0%    14227Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       349m     2%    2106Mi    3%
20:28:36 DEBUG --- stderr ---
20:28:36 DEBUG
20:29:33 INFO
20:29:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:29:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:29:33 INFO [loop_until]: OK (rc = 0)
20:29:33 DEBUG --- stdout ---
20:29:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            7m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       5m       375Mi
ds-idrepo-0                    3646m    13812Mi
ds-idrepo-1                    10m      13635Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           3771m    3897Mi
idm-65858d8c4c-8ff69           2760m    3964Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    272m     580Mi
20:29:33 DEBUG --- stderr ---
20:29:33 DEBUG
20:29:36 INFO
20:29:36 INFO [loop_until]: kubectl --namespace=xlou top node
20:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:29:37 INFO [loop_until]: OK (rc = 0)
20:29:37 DEBUG --- stdout ---
20:29:37 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m      0%    1266Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   68m      0%    5571Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   70m      0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   68m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2770m    17%   5234Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   362m     2%    2143Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3659m    23%   4851Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m      0%    1067Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             52m      0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1122Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4061m    25%   14405Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14157Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             56m      0%    14227Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       349m     2%    2107Mi    3%
20:29:37 DEBUG --- stderr ---
20:29:37 DEBUG
20:30:33 INFO
20:30:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:30:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:30:33 INFO [loop_until]: OK (rc = 0)
20:30:33 DEBUG --- stdout ---
20:30:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4611Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       5m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    3956m    13822Mi
ds-idrepo-1                    8m       13635Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       4Mi
idm-65858d8c4c-5kwbg           3696m    3902Mi
idm-65858d8c4c-8ff69           2682m    3966Mi
lodemon-56989b88bb-nm2fw       1m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    266m     579Mi
20:30:33 DEBUG --- stderr ---
20:30:33 DEBUG
20:30:37 INFO
20:30:37 INFO [loop_until]: kubectl --namespace=xlou top node
20:30:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:30:37 INFO [loop_until]: OK (rc = 0)
20:30:37 DEBUG --- stdout ---
20:30:37 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m      0%    1263Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m      0%    5572Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m      0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m      0%    5685Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2906m    18%   5236Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   349m     2%    2140Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3487m    21%   4844Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m      0%    1068Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m      0%    1072Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1120Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             3892m    24%   14402Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14158Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m      0%    14229Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       349m     2%    2106Mi    3%
20:30:37 DEBUG --- stderr ---
20:30:37 DEBUG
20:31:33 INFO
20:31:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:31:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:31:33 INFO [loop_until]: OK (rc = 0)
20:31:33 DEBUG --- stdout ---
20:31:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4611Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       414Mi
ds-cts-1                       8m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    3564m    13797Mi
ds-idrepo-1                    8m       13636Mi
ds-idrepo-2                    10m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           2459m    3903Mi
idm-65858d8c4c-8ff69           2722m    3969Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    234m     581Mi
20:31:33 DEBUG --- stderr ---
20:31:33 DEBUG
20:31:37 INFO
20:31:37 INFO [loop_until]: kubectl --namespace=xlou top node
20:31:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:31:37 INFO [loop_until]: OK (rc = 0)
20:31:37 DEBUG --- stdout ---
20:31:37 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m      0%    1266Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m      0%    5573Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   69m      0%    5717Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   65m      0%    5687Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2241m    14%   5241Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   331m     2%    2143Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3300m    20%   4846Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             56m      0%    1068Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             56m      0%    1075Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1121Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             3197m    20%   14393Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14156Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m      0%    14232Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       277m     1%    2111Mi    3%
20:31:37 DEBUG --- stderr ---
20:31:37 DEBUG
20:32:33 INFO
20:32:33 INFO [loop_until]: kubectl --namespace=xlou top pods
20:32:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:32:33 INFO [loop_until]: OK (rc = 0)
20:32:33 DEBUG --- stdout ---
20:32:33 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4611Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       414Mi
ds-cts-1                       5m       385Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    12m      13797Mi
ds-idrepo-1                    8m       13635Mi
ds-idrepo-2                    9m       13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           6m       3903Mi
idm-65858d8c4c-8ff69           7m       3969Mi
lodemon-56989b88bb-nm2fw       1m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    11m      107Mi
20:32:33 DEBUG --- stderr ---
20:32:33 DEBUG
20:32:37 INFO
20:32:37 INFO [loop_until]: kubectl --namespace=xlou top node
20:32:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:32:37 INFO [loop_until]: OK (rc = 0)
20:32:37 DEBUG --- stdout ---
20:32:37 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m      0%    1267Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   68m      0%    5574Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m      0%    5717Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   65m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   73m      0%    5241Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   128m     0%    2133Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   70m      0%    4848Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m      0%    1070Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m      0%    1072Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1122Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             62m      0%    14394Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             56m      0%    14156Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             59m      0%    14232Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       75m      0%    1641Mi    2%
20:32:37 DEBUG --- stderr ---
20:32:37 DEBUG
127.0.0.1 - - [11/Aug/2023 20:33:30] "GET /monitoring/average?start_time=23-08-11_19:03:04&stop_time=23-08-11_19:31:29 HTTP/1.1" 200 -
20:33:34 INFO
20:33:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:33:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:33:34 INFO [loop_until]: OK (rc = 0)
20:33:34 DEBUG --- stdout ---
20:33:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4611Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       5m       415Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    10m      13798Mi
ds-idrepo-1                    8m       13636Mi
ds-idrepo-2                    9m       13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           5m       3903Mi
idm-65858d8c4c-8ff69           6m       3969Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    1m       107Mi
20:33:34 DEBUG --- stderr ---
20:33:34 DEBUG
20:33:37 INFO
20:33:37 INFO [loop_until]: kubectl --namespace=xlou top node
20:33:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:33:37 INFO [loop_until]: OK (rc = 0)
20:33:37 DEBUG --- stdout ---
20:33:37 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m      0%    1265Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   64m      0%    5574Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   67m      0%    5716Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m      0%    5687Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   72m      0%    5243Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   127m     0%    2142Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   68m      0%    4847Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m      0%    1070Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m      0%    1072Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             53m      0%    1122Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             60m      0%    14394Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             58m      0%    14156Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             57m      0%    14231Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       71m      0%    1639Mi    2%
20:33:37 DEBUG --- stderr ---
20:33:37 DEBUG
20:34:34 INFO
20:34:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:34:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:34:34 INFO [loop_until]: OK (rc = 0)
20:34:34 DEBUG --- stdout ---
20:34:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            7m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            5m       4640Mi
ds-cts-0                       5m       414Mi
ds-cts-1                       6m       384Mi
ds-cts-2                       6m       376Mi
ds-idrepo-0                    2008m    13800Mi
ds-idrepo-1                    8m       13635Mi
ds-idrepo-2                    12m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           2632m    3908Mi
idm-65858d8c4c-8ff69           2403m    3972Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    573m     508Mi
20:34:34 DEBUG --- stderr ---
20:34:34 DEBUG
20:34:37 INFO
20:34:37 INFO [loop_until]: kubectl --namespace=xlou top node
20:34:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:34:37 INFO [loop_until]: OK (rc = 0)
20:34:37 DEBUG --- stdout ---
20:34:37 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   80m      0%    1265Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   73m      0%    5576Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   69m      0%    5716Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   64m      0%    5687Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   2969m    18%   5242Mi    8%
gke-xlou-cdm-default-pool-f05840a3-h81k   343m     2%    2138Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   3215m    20%   4852Mi    8%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m      0%    1064Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m      0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1118Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             3781m    23%   14403Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14159Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             58m      0%    14231Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       647m     4%    2030Mi    3%
20:34:37 DEBUG --- stderr ---
20:34:37 DEBUG
20:35:34 INFO
20:35:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:35:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:35:34 INFO [loop_until]: OK (rc = 0)
20:35:34 DEBUG --- stdout ---
20:35:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            7m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       415Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    3184m    13803Mi
ds-idrepo-1                    1952m    13636Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           5279m    5125Mi
idm-65858d8c4c-8ff69           4115m    4117Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    454m     597Mi
20:35:34 DEBUG --- stderr ---
20:35:34 DEBUG
20:35:37 INFO
20:35:37 INFO [loop_until]: kubectl --namespace=xlou top node
20:35:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:35:37 INFO [loop_until]: OK (rc = 0)
20:35:37 DEBUG --- stdout ---
20:35:37 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   79m      0%    1263Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   68m      0%    5575Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   65m      0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   64m      0%    5688Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   4202m    26%   5384Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   408m     2%    2160Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   5250m    33%   6057Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             56m      0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             54m      0%    1073Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             53m      0%    1122Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             2906m    18%   14394Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             59m      0%    14160Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             2286m    14%   14229Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       544m     3%    2122Mi    3%
20:35:37 DEBUG --- stderr ---
20:35:37 DEBUG
20:36:34 INFO
20:36:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:36:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:36:34 INFO [loop_until]: OK (rc = 0)
20:36:34 DEBUG --- stdout ---
20:36:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            7m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            8m       4640Mi
ds-cts-0                       5m       415Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       377Mi
ds-idrepo-0                    3213m    13805Mi
ds-idrepo-1                    2033m    13635Mi
ds-idrepo-2                    12m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           5726m    5200Mi
idm-65858d8c4c-8ff69           4212m    4203Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    352m     628Mi
20:36:34 DEBUG --- stderr ---
20:36:34 DEBUG
20:36:37 INFO
20:36:37 INFO [loop_until]: kubectl --namespace=xlou top node
20:36:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:36:37 INFO [loop_until]: OK (rc = 0)
20:36:38 DEBUG --- stdout ---
20:36:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m      0%    1263Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m      0%    5576Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m      0%    5715Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   64m      0%    5685Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   4057m    25%   5466Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   405m     2%    2158Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   5309m    33%   6131Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m      0%    1068Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m      0%    1072Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1122Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             2706m    17%   14399Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             62m      0%    14160Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             2329m    14%   14229Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       428m     2%    2150Mi    3%
20:36:38 DEBUG --- stderr ---
20:36:38 DEBUG
20:37:34 INFO
20:37:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:37:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:37:34 INFO [loop_until]: OK (rc = 0)
20:37:34 DEBUG --- stdout ---
20:37:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       7m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    2881m    13802Mi
ds-idrepo-1                    2248m    13819Mi
ds-idrepo-2                    11m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           5881m    5186Mi
idm-65858d8c4c-8ff69           3618m    4208Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    338m     709Mi
20:37:34 DEBUG --- stderr ---
20:37:34 DEBUG
20:37:38 INFO
20:37:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:37:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:37:38 INFO [loop_until]: OK (rc = 0)
20:37:38 DEBUG --- stdout ---
20:37:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   77m      0%    1262Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   66m      0%    5574Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   67m      0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   61m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3800m    23%   5474Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   415m     2%    2146Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   5531m    34%   6118Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m      0%    1070Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m      0%    1074Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             55m      0%    1120Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             2764m    17%   14405Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             60m      0%    14158Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             2577m    16%   14407Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       409m     2%    2234Mi    3%
20:37:38 DEBUG --- stderr ---
20:37:38 DEBUG
20:38:34 INFO
20:38:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:38:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:38:34 INFO [loop_until]: OK (rc = 0)
20:38:34 DEBUG --- stdout ---
20:38:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    4478m    13805Mi
ds-idrepo-1                    10m      13819Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           4050m    5188Mi
idm-65858d8c4c-8ff69           3658m    4218Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    316m     763Mi
20:38:34 DEBUG --- stderr ---
20:38:34 DEBUG
20:38:38 INFO
20:38:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:38:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:38:38 INFO [loop_until]: OK (rc = 0)
20:38:38 DEBUG --- stdout ---
20:38:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m      0%    1265Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   66m      0%    5573Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   66m      0%    5715Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   62m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3767m    23%   5484Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   403m     2%    2157Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   4002m    25%   6123Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m      0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m      0%    1073Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             55m      0%    1118Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4372m    27%   14407Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14160Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             60m      0%    14406Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       389m     2%    2287Mi    3%
20:38:38 DEBUG --- stderr ---
20:38:38 DEBUG
20:39:34 INFO
20:39:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:39:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:39:34 INFO [loop_until]: OK (rc = 0)
20:39:34 DEBUG --- stdout ---
20:39:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            5m       4640Mi
ds-cts-0                       5m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    4748m    13822Mi
ds-idrepo-1                    1489m    13723Mi
ds-idrepo-2                    11m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           6540m    5193Mi
idm-65858d8c4c-8ff69           3365m    4247Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    334m     825Mi
20:39:34 DEBUG --- stderr ---
20:39:34 DEBUG
20:39:38 INFO
20:39:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:39:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:39:38 INFO [loop_until]: OK (rc = 0)
20:39:38 DEBUG --- stdout ---
20:39:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   75m      0%    1264Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   73m      0%    5569Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m      0%    5712Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m      0%    5685Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3668m    23%   5511Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   416m     2%    2152Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   6361m    40%   6121Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             58m      0%    1070Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             56m      0%    1074Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             53m      0%    1117Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4759m    29%   14405Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14159Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             400m     2%    14309Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       411m     2%    2352Mi    4%
20:39:38 DEBUG --- stderr ---
20:39:38 DEBUG
20:40:34 INFO
20:40:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:40:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:40:34 INFO [loop_until]: OK (rc = 0)
20:40:34 DEBUG --- stdout ---
20:40:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            7m       4640Mi
ds-cts-0                       5m       415Mi
ds-cts-1                       6m       384Mi
ds-cts-2                       6m       375Mi
ds-idrepo-0                    2524m    13827Mi
ds-idrepo-1                    1670m    13796Mi
ds-idrepo-2                    12m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           6531m    5196Mi
idm-65858d8c4c-8ff69           3688m    4255Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    341m     825Mi
20:40:34 DEBUG --- stderr ---
20:40:34 DEBUG
20:40:38 INFO
20:40:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:40:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:40:38 INFO [loop_until]: OK (rc = 0)
20:40:38 DEBUG --- stdout ---
20:40:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   73m      0%    1263Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m      0%    5570Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   69m      0%    5712Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   65m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3818m    24%   5523Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   422m     2%    2152Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   6423m    40%   6127Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             57m      0%    1069Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             57m      0%    1073Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             53m      0%    1118Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             2689m    16%   14430Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14161Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             2936m    18%   14386Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       398m     2%    2351Mi    4%
20:40:38 DEBUG --- stderr ---
20:40:38 DEBUG
20:41:34 INFO
20:41:34 INFO [loop_until]: kubectl --namespace=xlou top pods
20:41:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:41:34 INFO [loop_until]: OK (rc = 0)
20:41:34 DEBUG --- stdout ---
20:41:34 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            7m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            7m       4640Mi
ds-cts-0                       5m       414Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       5m       375Mi
ds-idrepo-0                    4238m    13823Mi
ds-idrepo-1                    9m       13800Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           3868m    5197Mi
idm-65858d8c4c-8ff69           3679m    4267Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    295m     826Mi
20:41:34 DEBUG --- stderr ---
20:41:34 DEBUG
20:41:38 INFO
20:41:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:41:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:41:38 INFO [loop_until]: OK (rc = 0)
20:41:38 DEBUG --- stdout ---
20:41:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   73m      0%    1263Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   65m      0%    5571Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m      0%    5714Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m      0%    5686Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3507m    22%   5524Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   401m     2%    2161Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   4121m    25%   6124Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             54m      0%    1070Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             57m      0%    1072Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             55m      0%    1121Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4353m    27%   14409Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             62m      0%    14161Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             64m      0%    14385Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       373m     2%    2359Mi    4%
20:41:38 DEBUG --- stderr ---
20:41:38 DEBUG
20:42:35 INFO
20:42:35 INFO [loop_until]: kubectl --namespace=xlou top pods
20:42:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:42:35 INFO [loop_until]: OK (rc = 0)
20:42:35 DEBUG --- stdout ---
20:42:35 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            7m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       6m       415Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       5m       375Mi
ds-idrepo-0                    4771m    13804Mi
ds-idrepo-1                    16m      13802Mi
ds-idrepo-2                    10m      13568Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           5070m    5198Mi
idm-65858d8c4c-8ff69           3364m    4315Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    328m     900Mi
20:42:35 DEBUG --- stderr ---
20:42:35 DEBUG
20:42:38 INFO
20:42:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:42:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:42:38 INFO [loop_until]: OK (rc = 0)
20:42:38 DEBUG --- stdout ---
20:42:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   74m      0%    1261Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m      0%    5572Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   67m      0%    5726Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   63m      0%    5688Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   3528m    22%   5580Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   408m     2%    2141Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   5156m    32%   6129Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             55m      0%    1071Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             58m      0%    1076Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             55m      0%    1120Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             4875m    30%   14426Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             62m      0%    14162Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             62m      0%    14387Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       406m     2%    2426Mi    4%
20:42:38 DEBUG --- stderr ---
20:42:38 DEBUG
20:43:35 INFO
20:43:35 INFO [loop_until]: kubectl --namespace=xlou top pods
20:43:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:43:35 INFO [loop_until]: OK (rc = 0)
20:43:35 DEBUG --- stdout ---
20:43:35 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       5m       415Mi
ds-cts-1                       4m       384Mi
ds-cts-2                       5m       376Mi
ds-idrepo-0                    3093m    13823Mi
ds-idrepo-1                    2151m    13808Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           6026m    5199Mi
idm-65858d8c4c-8ff69           4023m    4419Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    368m     901Mi
20:43:35 DEBUG --- stderr ---
20:43:35 DEBUG
20:43:38 INFO
20:43:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:43:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:43:38 INFO [loop_until]: OK (rc = 0)
20:43:38 DEBUG --- stdout ---
20:43:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   74m      0%    1264Mi    2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m      0%    5573Mi    9%
gke-xlou-cdm-default-pool-f05840a3-976h   68m      0%    5716Mi    9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   64m      0%    5688Mi    9%
gke-xlou-cdm-default-pool-f05840a3-bf2g   4190m    26%   5682Mi    9%
gke-xlou-cdm-default-pool-f05840a3-h81k   418m     2%    2127Mi    3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   6291m    39%   6127Mi    10%
gke-xlou-cdm-ds-32e4dcb1-1l6p             55m      0%    1068Mi    1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             57m      0%    1073Mi    1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             54m      0%    1121Mi    1%
gke-xlou-cdm-ds-32e4dcb1-b374             2701m    16%   14406Mi   24%
gke-xlou-cdm-ds-32e4dcb1-n920             61m      0%    14163Mi   24%
gke-xlou-cdm-ds-32e4dcb1-x4wx             3003m    18%   14397Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m       398m     2%    2429Mi    4%
20:43:38 DEBUG --- stderr ---
20:43:38 DEBUG
20:44:35 INFO
20:44:35 INFO [loop_until]: kubectl --namespace=xlou top pods
20:44:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:44:35 INFO [loop_until]: OK (rc = 0)
20:44:35 DEBUG --- stdout ---
20:44:35 DEBUG NAME                          CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb      1m       4Mi
am-55f77847b7-7qk7g            6m       4612Mi
am-55f77847b7-ngpns            6m       4633Mi
am-55f77847b7-q6zcv            6m       4640Mi
ds-cts-0                       5m       415Mi
ds-cts-1                       5m       384Mi
ds-cts-2                       5m       376Mi
ds-idrepo-0                    4805m    13820Mi
ds-idrepo-1                    9m       13809Mi
ds-idrepo-2                    11m      13569Mi
end-user-ui-6845bc78c7-m5k2c   1m       3Mi
idm-65858d8c4c-5kwbg           4394m    5199Mi
idm-65858d8c4c-8ff69           3719m    4429Mi
lodemon-56989b88bb-nm2fw       2m       68Mi
login-ui-74d6fb46c-2qx2r       1m       3Mi
overseer-0-5fcfb8f45c-v6ck5    293m     902Mi
20:44:35 DEBUG --- stderr ---
20:44:35 DEBUG
20:44:38 INFO
20:44:38 INFO [loop_until]: kubectl --namespace=xlou top node
20:44:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
20:44:38 INFO [loop_until]: OK (rc = 0)
20:44:38 DEBUG --- stdout ---
20:44:38 DEBUG NAME                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1276Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5715Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5686Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3526m 22% 5690Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 412m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4568m 28% 6128Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4720m 29% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14161Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 108m 0% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 367m 2% 2431Mi 4% 20:44:38 DEBUG --- stderr --- 20:44:38 DEBUG 20:45:35 INFO 20:45:35 INFO [loop_until]: kubectl --namespace=xlou top pods 20:45:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:45:35 INFO [loop_until]: OK (rc = 0) 20:45:35 DEBUG --- stdout --- 20:45:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 7m 4641Mi ds-cts-0 5m 414Mi ds-cts-1 5m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 4428m 13823Mi ds-idrepo-1 76m 13811Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 4224m 5201Mi idm-65858d8c4c-8ff69 3470m 4432Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 318m 901Mi 20:45:35 DEBUG --- stderr --- 20:45:35 DEBUG 20:45:39 INFO 20:45:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:45:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:45:39 INFO [loop_until]: OK (rc = 0) 20:45:39 DEBUG --- stdout --- 20:45:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1263Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 
5714Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3487m 21% 5694Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 395m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4326m 27% 6129Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4525m 28% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 772m 4% 14397Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 373m 2% 2430Mi 4% 20:45:39 DEBUG --- stderr --- 20:45:39 DEBUG 20:46:35 INFO 20:46:35 INFO [loop_until]: kubectl --namespace=xlou top pods 20:46:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:46:35 INFO [loop_until]: OK (rc = 0) 20:46:35 DEBUG --- stdout --- 20:46:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 5m 414Mi ds-cts-1 5m 384Mi ds-cts-2 5m 376Mi ds-idrepo-0 3457m 13807Mi ds-idrepo-1 1594m 13811Mi ds-idrepo-2 10m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 5493m 5200Mi idm-65858d8c4c-8ff69 3517m 4538Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 336m 902Mi 20:46:35 DEBUG --- stderr --- 20:46:35 DEBUG 20:46:39 INFO 20:46:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:46:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:46:39 INFO [loop_until]: OK (rc = 0) 20:46:39 DEBUG --- stdout --- 20:46:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1266Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5714Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5689Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 4086m 25% 5810Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-h81k 417m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5837m 36% 6129Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3559m 22% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 2011m 12% 14399Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 405m 2% 2431Mi 4% 20:46:39 DEBUG --- stderr --- 20:46:39 DEBUG 20:47:35 INFO 20:47:35 INFO [loop_until]: kubectl --namespace=xlou top pods 20:47:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:47:35 INFO [loop_until]: OK (rc = 0) 20:47:35 DEBUG --- stdout --- 20:47:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4641Mi ds-cts-0 5m 415Mi ds-cts-1 5m 384Mi ds-cts-2 6m 375Mi ds-idrepo-0 2974m 13806Mi ds-idrepo-1 3011m 13798Mi ds-idrepo-2 18m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6295m 5200Mi idm-65858d8c4c-8ff69 4058m 4570Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 349m 902Mi 20:47:35 DEBUG --- stderr --- 20:47:35 DEBUG 20:47:39 INFO 20:47:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:47:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:47:39 INFO [loop_until]: OK (rc = 0) 20:47:39 DEBUG --- stdout --- 20:47:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1262Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5572Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5688Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 4166m 26% 5833Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 424m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5934m 37% 6130Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 
1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2837m 17% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 14157Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 2264m 14% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 405m 2% 2431Mi 4% 20:47:39 DEBUG --- stderr --- 20:47:39 DEBUG 20:48:35 INFO 20:48:35 INFO [loop_until]: kubectl --namespace=xlou top pods 20:48:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:48:35 INFO [loop_until]: OK (rc = 0) 20:48:35 DEBUG --- stdout --- 20:48:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4640Mi ds-cts-0 5m 415Mi ds-cts-1 7m 384Mi ds-cts-2 4m 375Mi ds-idrepo-0 4368m 13799Mi ds-idrepo-1 9m 13812Mi ds-idrepo-2 13m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3953m 5201Mi idm-65858d8c4c-8ff69 3540m 4574Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 296m 903Mi 20:48:35 DEBUG --- stderr --- 20:48:35 DEBUG 20:48:39 INFO 20:48:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:48:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:48:39 INFO [loop_until]: OK (rc = 0) 20:48:39 DEBUG --- stdout --- 20:48:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5574Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5718Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5689Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3682m 23% 5840Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 403m 2% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4233m 26% 6133Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4363m 27% 14406Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14158Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 375m 2% 2429Mi 4% 20:48:39 DEBUG --- stderr --- 20:48:39 DEBUG 20:49:35 INFO 20:49:35 INFO [loop_until]: kubectl --namespace=xlou top pods 20:49:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:49:35 INFO [loop_until]: OK (rc = 0) 20:49:35 DEBUG --- stdout --- 20:49:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 7m 4640Mi ds-cts-0 5m 415Mi ds-cts-1 5m 384Mi ds-cts-2 5m 376Mi ds-idrepo-0 4468m 13803Mi ds-idrepo-1 351m 13816Mi ds-idrepo-2 16m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 4129m 5204Mi idm-65858d8c4c-8ff69 3681m 4667Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 314m 904Mi 20:49:35 DEBUG --- stderr --- 20:49:35 DEBUG 20:49:39 INFO 20:49:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:49:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:49:39 INFO [loop_until]: OK (rc = 0) 20:49:39 DEBUG --- stdout --- 20:49:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1268Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5575Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5717Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5685Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3752m 23% 5920Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 410m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4161m 26% 6133Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4639m 29% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14160Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 55m 0% 14406Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 386m 2% 2432Mi 4% 
20:49:39 DEBUG --- stderr --- 20:49:39 DEBUG 20:50:35 INFO 20:50:35 INFO [loop_until]: kubectl --namespace=xlou top pods 20:50:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:50:35 INFO [loop_until]: OK (rc = 0) 20:50:35 DEBUG --- stdout --- 20:50:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4620Mi ds-cts-0 5m 414Mi ds-cts-1 7m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 4648m 13802Mi ds-idrepo-1 653m 13785Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 4220m 5204Mi idm-65858d8c4c-8ff69 3496m 4680Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 291m 911Mi 20:50:35 DEBUG --- stderr --- 20:50:35 DEBUG 20:50:39 INFO 20:50:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:50:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:50:39 INFO [loop_until]: OK (rc = 0) 20:50:39 DEBUG --- stdout --- 20:50:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5571Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5693Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5685Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3529m 22% 5942Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 407m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4507m 28% 6133Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4542m 28% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14162Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 533m 3% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 366m 2% 2437Mi 4% 20:50:39 DEBUG --- stderr --- 20:50:39 DEBUG 20:51:35 INFO 20:51:35 INFO [loop_until]: kubectl --namespace=xlou top pods 20:51:35 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 20:51:35 INFO [loop_until]: OK (rc = 0) 20:51:35 DEBUG --- stdout --- 20:51:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 7m 4621Mi ds-cts-0 5m 415Mi ds-cts-1 5m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 3271m 13808Mi ds-idrepo-1 1287m 13788Mi ds-idrepo-2 13m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6553m 5203Mi idm-65858d8c4c-8ff69 3371m 4762Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 297m 911Mi 20:51:35 DEBUG --- stderr --- 20:51:35 DEBUG 20:51:39 INFO 20:51:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:51:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:51:39 INFO [loop_until]: OK (rc = 0) 20:51:39 DEBUG --- stdout --- 20:51:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1268Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5572Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5697Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5685Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3730m 23% 6023Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 424m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5184m 32% 6132Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3262m 20% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14162Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 1936m 12% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 380m 2% 2438Mi 4% 20:51:39 DEBUG --- stderr --- 20:51:39 DEBUG 20:52:36 INFO 20:52:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:52:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:52:36 INFO [loop_until]: OK (rc = 0) 20:52:36 DEBUG --- stdout --- 20:52:36 DEBUG NAME CPU(cores) 
MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4621Mi ds-cts-0 6m 415Mi ds-cts-1 5m 384Mi ds-cts-2 6m 375Mi ds-idrepo-0 4480m 13801Mi ds-idrepo-1 13m 13788Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3875m 5205Mi idm-65858d8c4c-8ff69 3705m 4819Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 277m 913Mi 20:52:36 DEBUG --- stderr --- 20:52:36 DEBUG 20:52:39 INFO 20:52:39 INFO [loop_until]: kubectl --namespace=xlou top node 20:52:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:52:40 INFO [loop_until]: OK (rc = 0) 20:52:40 DEBUG --- stdout --- 20:52:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5572Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5694Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 4150m 26% 6162Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 411m 2% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4889m 30% 6133Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4488m 28% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14166Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14374Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 361m 2% 2439Mi 4% 20:52:40 DEBUG --- stderr --- 20:52:40 DEBUG 20:53:36 INFO 20:53:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:53:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:53:36 INFO [loop_until]: OK (rc = 0) 20:53:36 DEBUG --- stdout --- 20:53:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4621Mi ds-cts-0 6m 415Mi 
ds-cts-1 5m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 4119m 13801Mi ds-idrepo-1 9m 13788Mi ds-idrepo-2 11m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 4Mi idm-65858d8c4c-5kwbg 4304m 5205Mi idm-65858d8c4c-8ff69 3430m 4973Mi lodemon-56989b88bb-nm2fw 1m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 287m 912Mi 20:53:36 DEBUG --- stderr --- 20:53:36 DEBUG 20:53:40 INFO 20:53:40 INFO [loop_until]: kubectl --namespace=xlou top node 20:53:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:53:40 INFO [loop_until]: OK (rc = 0) 20:53:40 DEBUG --- stdout --- 20:53:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1267Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5696Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5689Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3370m 21% 6237Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 409m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4434m 27% 6134Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4483m 28% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14161Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14384Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 357m 2% 2440Mi 4% 20:53:40 DEBUG --- stderr --- 20:53:40 DEBUG 20:54:36 INFO 20:54:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:54:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:54:36 INFO [loop_until]: OK (rc = 0) 20:54:36 DEBUG --- stdout --- 20:54:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 7m 4621Mi ds-cts-0 6m 415Mi ds-cts-1 5m 384Mi ds-cts-2 6m 376Mi ds-idrepo-0 2767m 13808Mi ds-idrepo-1 2265m 13813Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi 
idm-65858d8c4c-5kwbg 5871m 5206Mi idm-65858d8c4c-8ff69 4014m 4998Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 318m 911Mi 20:54:36 DEBUG --- stderr --- 20:54:36 DEBUG 20:54:40 INFO 20:54:40 INFO [loop_until]: kubectl --namespace=xlou top node 20:54:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:54:40 INFO [loop_until]: OK (rc = 0) 20:54:40 DEBUG --- stdout --- 20:54:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5570Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5698Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5690Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 4210m 26% 6261Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 433m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6048m 38% 6131Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3371m 21% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14160Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 2266m 14% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 405m 2% 2450Mi 4% 20:54:40 DEBUG --- stderr --- 20:54:40 DEBUG 20:55:36 INFO 20:55:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:55:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:55:36 INFO [loop_until]: OK (rc = 0) 20:55:36 DEBUG --- stdout --- 20:55:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4612Mi am-55f77847b7-ngpns 7m 4633Mi am-55f77847b7-q6zcv 6m 4621Mi ds-cts-0 6m 415Mi ds-cts-1 5m 384Mi ds-cts-2 6m 375Mi ds-idrepo-0 2553m 13797Mi ds-idrepo-1 1959m 13817Mi ds-idrepo-2 10m 13569Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 4998m 5205Mi idm-65858d8c4c-8ff69 4168m 5010Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi 
overseer-0-5fcfb8f45c-v6ck5 310m 912Mi 20:55:36 DEBUG --- stderr --- 20:55:36 DEBUG 20:55:40 INFO 20:55:40 INFO [loop_until]: kubectl --namespace=xlou top node 20:55:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:55:40 INFO [loop_until]: OK (rc = 0) 20:55:40 DEBUG --- stdout --- 20:55:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1266Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5697Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5688Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 4292m 27% 6272Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 424m 2% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5360m 33% 6133Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3154m 19% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14163Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 2202m 13% 14407Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 394m 2% 2439Mi 4% 20:55:40 DEBUG --- stderr --- 20:55:40 DEBUG 20:56:36 INFO 20:56:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:56:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:56:36 INFO [loop_until]: OK (rc = 0) 20:56:36 DEBUG --- stdout --- 20:56:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4621Mi ds-cts-0 6m 415Mi ds-cts-1 5m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 4421m 13798Mi ds-idrepo-1 10m 13816Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3858m 5207Mi idm-65858d8c4c-8ff69 3786m 5011Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 280m 913Mi 20:56:36 DEBUG --- stderr --- 20:56:36 DEBUG 20:56:40 INFO 20:56:40 INFO [loop_until]: kubectl --namespace=xlou top 
node 20:56:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:56:40 INFO [loop_until]: OK (rc = 0) 20:56:40 DEBUG --- stdout --- 20:56:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1264Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5575Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5695Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5686Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3936m 24% 6273Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 392m 2% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3932m 24% 6132Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4356m 27% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 14161Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14405Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 350m 2% 2439Mi 4% 20:56:40 DEBUG --- stderr --- 20:56:40 DEBUG 20:57:36 INFO 20:57:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:57:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:57:36 INFO [loop_until]: OK (rc = 0) 20:57:36 DEBUG --- stdout --- 20:57:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4621Mi ds-cts-0 6m 415Mi ds-cts-1 5m 386Mi ds-cts-2 5m 375Mi ds-idrepo-0 4337m 13822Mi ds-idrepo-1 330m 13788Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 4305m 5208Mi idm-65858d8c4c-8ff69 3215m 5011Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 279m 912Mi 20:57:36 DEBUG --- stderr --- 20:57:36 DEBUG 20:57:40 INFO 20:57:40 INFO [loop_until]: kubectl --namespace=xlou top node 20:57:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:57:40 INFO [loop_until]: OK (rc = 0) 20:57:40 DEBUG --- stdout --- 20:57:40 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1265Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5699Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3585m 22% 6269Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 418m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4446m 27% 6136Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4167m 26% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 14163Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 502m 3% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 351m 2% 2439Mi 4% 20:57:40 DEBUG --- stderr --- 20:57:40 DEBUG 20:58:36 INFO 20:58:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:58:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:58:36 INFO [loop_until]: OK (rc = 0) 20:58:36 DEBUG --- stdout --- 20:58:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 7m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4621Mi ds-cts-0 5m 415Mi ds-cts-1 5m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 3190m 13807Mi ds-idrepo-1 1992m 13818Mi ds-idrepo-2 10m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 6556m 5211Mi idm-65858d8c4c-8ff69 3822m 5010Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 316m 912Mi 20:58:36 DEBUG --- stderr --- 20:58:36 DEBUG 20:58:40 INFO 20:58:40 INFO [loop_until]: kubectl --namespace=xlou top node 20:58:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:58:41 INFO [loop_until]: OK (rc = 0) 20:58:41 DEBUG --- stdout --- 20:58:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1268Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 
5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5700Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5688Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3883m 24% 6266Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 414m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7833m 49% 6140Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3056m 19% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 14165Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 2489m 15% 14406Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 401m 2% 2439Mi 4% 20:58:41 DEBUG --- stderr --- 20:58:41 DEBUG 20:59:36 INFO 20:59:36 INFO [loop_until]: kubectl --namespace=xlou top pods 20:59:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:59:36 INFO [loop_until]: OK (rc = 0) 20:59:36 DEBUG --- stdout --- 20:59:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-4p7xb 1m 4Mi am-55f77847b7-7qk7g 6m 4612Mi am-55f77847b7-ngpns 6m 4633Mi am-55f77847b7-q6zcv 6m 4621Mi ds-cts-0 5m 414Mi ds-cts-1 5m 384Mi ds-cts-2 5m 375Mi ds-idrepo-0 4289m 13805Mi ds-idrepo-1 10m 13817Mi ds-idrepo-2 11m 13568Mi end-user-ui-6845bc78c7-m5k2c 1m 3Mi idm-65858d8c4c-5kwbg 3862m 5213Mi idm-65858d8c4c-8ff69 3444m 5012Mi lodemon-56989b88bb-nm2fw 2m 68Mi login-ui-74d6fb46c-2qx2r 1m 3Mi overseer-0-5fcfb8f45c-v6ck5 277m 913Mi 20:59:36 DEBUG --- stderr --- 20:59:36 DEBUG 20:59:41 INFO 20:59:41 INFO [loop_until]: kubectl --namespace=xlou top node 20:59:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:59:41 INFO [loop_until]: OK (rc = 0) 20:59:41 DEBUG --- stdout --- 20:59:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1268Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5572Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5697Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5686Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-bf2g  3609m  22%  6272Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  401m  2%  2162Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  4165m  26%  6141Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  56m  0%  1070Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  54m  0%  1075Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  55m  0%  1120Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  4477m  28%  14430Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  60m  0%  14166Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  60m  0%  14403Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  335m  2%  2440Mi  4%
20:59:41 DEBUG --- stderr ---
20:59:41 DEBUG
21:00:36 INFO
21:00:36 INFO [loop_until]: kubectl --namespace=xlou top pods
21:00:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:00:36 INFO [loop_until]: OK (rc = 0)
21:00:36 DEBUG --- stdout ---
21:00:36 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  18m  4609Mi
am-55f77847b7-ngpns  6m  4634Mi
am-55f77847b7-q6zcv  7m  4621Mi
ds-cts-0  5m  415Mi
ds-cts-1  5m  384Mi
ds-cts-2  6m  375Mi
ds-idrepo-0  4027m  13799Mi
ds-idrepo-1  9m  13817Mi
ds-idrepo-2  10m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  4586m  5214Mi
idm-65858d8c4c-8ff69  3489m  5012Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  295m  913Mi
21:00:36 DEBUG --- stderr ---
21:00:36 DEBUG
21:00:41 INFO
21:00:41 INFO [loop_until]: kubectl --namespace=xlou top node
21:00:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:00:41 INFO [loop_until]: OK (rc = 0)
21:00:41 DEBUG --- stdout ---
21:00:41 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  75m  0%  1264Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  77m  0%  5569Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  68m  0%  5697Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  69m  0%  5685Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  3563m  22%  6272Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  407m  2%  2149Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  4846m  30%  6143Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  56m  0%  1065Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  54m  0%  1077Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  54m  0%  1125Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  3981m  25%  14412Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  60m  0%  14164Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  493m  3%  14405Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  364m  2%  2441Mi  4%
21:00:41 DEBUG --- stderr ---
21:00:41 DEBUG
21:01:36 INFO
21:01:36 INFO [loop_until]: kubectl --namespace=xlou top pods
21:01:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:01:36 INFO [loop_until]: OK (rc = 0)
21:01:36 DEBUG --- stdout ---
21:01:36 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  7m  4609Mi
am-55f77847b7-ngpns  6m  4634Mi
am-55f77847b7-q6zcv  6m  4621Mi
ds-cts-0  6m  416Mi
ds-cts-1  5m  384Mi
ds-cts-2  6m  376Mi
ds-idrepo-0  4462m  13805Mi
ds-idrepo-1  9m  13817Mi
ds-idrepo-2  11m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  4087m  5215Mi
idm-65858d8c4c-8ff69  3615m  5012Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  294m  913Mi
21:01:36 DEBUG --- stderr ---
21:01:36 DEBUG
21:01:41 INFO
21:01:41 INFO [loop_until]: kubectl --namespace=xlou top node
21:01:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:01:41 INFO [loop_until]: OK (rc = 0)
21:01:41 DEBUG --- stdout ---
21:01:41 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  75m  0%  1265Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  66m  0%  5570Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  68m  0%  5694Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  66m  0%  5689Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  3658m  23%  6270Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  402m  2%  2149Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  4216m  26%  6140Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  56m  0%  1070Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  53m  0%  1072Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  55m  0%  1123Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  4402m  27%  14428Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  61m  0%  14164Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  378m  2%  14406Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  361m  2%  2442Mi  4%
21:01:41 DEBUG --- stderr ---
21:01:41 DEBUG
21:02:36 INFO
21:02:36 INFO [loop_until]: kubectl --namespace=xlou top pods
21:02:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:02:37 INFO [loop_until]: OK (rc = 0)
21:02:37 DEBUG --- stdout ---
21:02:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  7m  4609Mi
am-55f77847b7-ngpns  6m  4634Mi
am-55f77847b7-q6zcv  6m  4621Mi
ds-cts-0  6m  415Mi
ds-cts-1  5m  384Mi
ds-cts-2  6m  376Mi
ds-idrepo-0  3073m  13801Mi
ds-idrepo-1  1751m  13818Mi
ds-idrepo-2  11m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  5597m  5214Mi
idm-65858d8c4c-8ff69  3613m  5012Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  325m  913Mi
21:02:37 DEBUG --- stderr ---
21:02:37 DEBUG
21:02:41 INFO
21:02:41 INFO [loop_until]: kubectl --namespace=xlou top node
21:02:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:02:41 INFO [loop_until]: OK (rc = 0)
21:02:41 DEBUG --- stdout ---
21:02:41 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1268Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  66m  0%  5568Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  69m  0%  5693Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  66m  0%  5689Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  3506m  22%  6270Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  411m  2%  2147Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  6381m  40%  6137Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  57m  0%  1071Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  54m  0%  1074Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  55m  0%  1124Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  3433m  21%  14429Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  60m  0%  14167Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  1929m  12%  14407Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  379m  2%  2439Mi  4%
21:02:41 DEBUG --- stderr ---
21:02:41 DEBUG
21:03:37 INFO
21:03:37 INFO [loop_until]: kubectl --namespace=xlou top pods
21:03:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:03:37 INFO [loop_until]: OK (rc = 0)
21:03:37 DEBUG --- stdout ---
21:03:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  7m  4609Mi
am-55f77847b7-ngpns  6m  4634Mi
am-55f77847b7-q6zcv  6m  4621Mi
ds-cts-0  6m  415Mi
ds-cts-1  5m  384Mi
ds-cts-2  6m  376Mi
ds-idrepo-0  2915m  13800Mi
ds-idrepo-1  2224m  13818Mi
ds-idrepo-2  11m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  5391m  5215Mi
idm-65858d8c4c-8ff69  3608m  5013Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  304m  913Mi
21:03:37 DEBUG --- stderr ---
21:03:37 DEBUG
21:03:41 INFO
21:03:41 INFO [loop_until]: kubectl --namespace=xlou top node
21:03:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:03:41 INFO [loop_until]: OK (rc = 0)
21:03:41 DEBUG --- stdout ---
21:03:41 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  74m  0%  1267Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  64m  0%  5571Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  69m  0%  5691Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5685Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  3744m  23%  6273Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  417m  2%  2144Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  5369m  33%  6153Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  58m  0%  1068Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  55m  0%  1074Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  55m  0%  1122Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  3088m  19%  14407Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  60m  0%  14164Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  2244m  14%  14408Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  384m  2%  2441Mi  4%
21:03:41 DEBUG --- stderr ---
21:03:41 DEBUG
21:04:37 INFO
21:04:37 INFO [loop_until]: kubectl --namespace=xlou top pods
21:04:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:04:37 INFO [loop_until]: OK (rc = 0)
21:04:37 DEBUG --- stdout ---
21:04:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  8m  4609Mi
am-55f77847b7-ngpns  6m  4634Mi
am-55f77847b7-q6zcv  7m  4621Mi
ds-cts-0  6m  415Mi
ds-cts-1  5m  385Mi
ds-cts-2  5m  376Mi
ds-idrepo-0  11m  13807Mi
ds-idrepo-1  10m  13794Mi
ds-idrepo-2  9m  13569Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  9m  5212Mi
idm-65858d8c4c-8ff69  6m  5012Mi
lodemon-56989b88bb-nm2fw  1m  68Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  34m  109Mi
21:04:37 DEBUG --- stderr ---
21:04:37 DEBUG
21:04:41 INFO
21:04:41 INFO [loop_until]: kubectl --namespace=xlou top node
21:04:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:04:41 INFO [loop_until]: OK (rc = 0)
21:04:41 DEBUG --- stdout ---
21:04:41 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  78m  0%  1265Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  75m  0%  5568Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  68m  0%  5695Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5686Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  70m  0%  6272Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  135m  0%  2142Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  75m  0%  6141Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  54m  0%  1068Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  53m  0%  1074Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  55m  0%  1120Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  61m  0%  14414Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  54m  0%  14165Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  60m  0%  14379Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  98m  0%  1643Mi  2%
21:04:41 DEBUG --- stderr ---
21:04:41 DEBUG
21:05:37 INFO
21:05:37 INFO [loop_until]: kubectl --namespace=xlou top pods
21:05:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:05:37 INFO [loop_until]: OK (rc = 0)
21:05:37 DEBUG --- stdout ---
21:05:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  7m  4609Mi
am-55f77847b7-ngpns  6m  4634Mi
am-55f77847b7-q6zcv  6m  4621Mi
ds-cts-0  5m  415Mi
ds-cts-1  6m  384Mi
ds-cts-2  5m  375Mi
ds-idrepo-0  9m  13807Mi
ds-idrepo-1  10m  13794Mi
ds-idrepo-2  11m  13568Mi
end-user-ui-6845bc78c7-m5k2c  1m  3Mi
idm-65858d8c4c-5kwbg  7m  5212Mi
idm-65858d8c4c-8ff69  5m  5011Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  1m  109Mi
21:05:37 DEBUG --- stderr ---
21:05:37 DEBUG
21:05:41 INFO
21:05:41 INFO [loop_until]: kubectl --namespace=xlou top node
21:05:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:05:41 INFO [loop_until]: OK (rc = 0)
21:05:41 DEBUG --- stdout ---
21:05:41 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1264Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  66m  0%  5570Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  66m  0%  5696Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  64m  0%  5688Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  71m  0%  6268Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  140m  0%  2133Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  72m  0%  6143Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  55m  0%  1070Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  55m  0%  1070Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  54m  0%  1121Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  59m  0%  14415Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  58m  0%  14164Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  60m  0%  14383Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  71m  0%  1645Mi  2%
21:05:41 DEBUG --- stderr ---
21:05:41 DEBUG
127.0.0.1 - - [11/Aug/2023 21:05:49] "GET /monitoring/average?start_time=23-08-11_19:35:30&stop_time=23-08-11_20:03:48 HTTP/1.1" 200 -
21:06:37 INFO
21:06:37 INFO [loop_until]: kubectl --namespace=xlou top pods
21:06:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:06:37 INFO [loop_until]: OK (rc = 0)
21:06:37 DEBUG --- stdout ---
21:06:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  4Mi
am-55f77847b7-7qk7g  7m  4609Mi
am-55f77847b7-ngpns  9m  4634Mi
am-55f77847b7-q6zcv  6m  4621Mi
ds-cts-0  58m  492Mi
ds-cts-1  5m  385Mi
ds-cts-2  6m  376Mi
ds-idrepo-0  9m  13807Mi
ds-idrepo-1  83m  13751Mi
ds-idrepo-2  90m  13535Mi
end-user-ui-6845bc78c7-m5k2c  1m  5Mi
idm-65858d8c4c-5kwbg  7m  5212Mi
idm-65858d8c4c-8ff69  6m  5011Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  3Mi
overseer-0-5fcfb8f45c-v6ck5  1m  109Mi
21:06:37 DEBUG --- stderr ---
21:06:37 DEBUG
21:06:41 INFO
21:06:41 INFO [loop_until]: kubectl --namespace=xlou top node
21:06:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:06:41 INFO [loop_until]: OK (rc = 0)
21:06:41 DEBUG --- stdout ---
21:06:41 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  85m  0%  1266Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  72m  0%  5572Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  74m  0%  5696Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  70m  0%  5690Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  75m  0%  6270Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  161m  1%  2139Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  74m  0%  6142Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  125m  0%  1069Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  53m  0%  1075Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  101m  0%  1194Mi  2%
gke-xlou-cdm-ds-32e4dcb1-b374  124m  0%  14376Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  64m  0%  14162Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  118m  0%  14338Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  628m  3%  1837Mi  3%
21:06:41 DEBUG --- stderr ---
21:06:41 DEBUG
21:07:37 INFO
21:07:37 INFO [loop_until]: kubectl --namespace=xlou top pods
21:07:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:07:37 INFO [loop_until]: OK (rc = 0)
21:07:37 DEBUG --- stdout ---
21:07:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  5Mi
am-55f77847b7-7qk7g  7m  4609Mi
am-55f77847b7-ngpns  8m  4683Mi
am-55f77847b7-q6zcv  9m  4621Mi
ds-cts-0  5m  415Mi
ds-cts-1  4m  384Mi
ds-cts-2  5m  376Mi
ds-idrepo-0  83m  13756Mi
ds-idrepo-1  10m  13742Mi
ds-idrepo-2  73m  13493Mi
end-user-ui-6845bc78c7-m5k2c  1m  5Mi
idm-65858d8c4c-5kwbg  8m  5212Mi
idm-65858d8c4c-8ff69  5m  5011Mi
lodemon-56989b88bb-nm2fw  3m  68Mi
login-ui-74d6fb46c-2qx2r  1m  4Mi
overseer-0-5fcfb8f45c-v6ck5  570m  118Mi
21:07:37 DEBUG --- stderr ---
21:07:37 DEBUG
21:07:42 INFO
21:07:42 INFO [loop_until]: kubectl --namespace=xlou top node
21:07:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:07:42 INFO [loop_until]: OK (rc = 0)
21:07:42 DEBUG --- stdout ---
21:07:42 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  75m  0%  1272Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  65m  0%  5571Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  72m  0%  5695Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  69m  0%  5737Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  71m  0%  6271Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  148m  0%  2137Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  72m  0%  6140Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  57m  0%  1071Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  53m  0%  1072Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  55m  0%  1126Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  195m  1%  14371Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  118m  0%  14092Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  121m  0%  14339Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  614m  3%  1665Mi  2%
21:07:42 DEBUG --- stderr ---
21:07:42 DEBUG
21:08:37 INFO
21:08:37 INFO [loop_until]: kubectl --namespace=xlou top pods
21:08:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:08:37 INFO [loop_until]: OK (rc = 0)
21:08:37 DEBUG --- stdout ---
21:08:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  5Mi
am-55f77847b7-7qk7g  8m  4609Mi
am-55f77847b7-ngpns  8m  4683Mi
am-55f77847b7-q6zcv  6m  4621Mi
ds-cts-0  5m  415Mi
ds-cts-1  4m  384Mi
ds-cts-2  6m  376Mi
ds-idrepo-0  11m  13755Mi
ds-idrepo-1  11m  13742Mi
ds-idrepo-2  9m  13493Mi
end-user-ui-6845bc78c7-m5k2c  1m  5Mi
idm-65858d8c4c-5kwbg  7m  5212Mi
idm-65858d8c4c-8ff69  5m  5011Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  4Mi
overseer-0-5fcfb8f45c-v6ck5  599m  158Mi
21:08:37 DEBUG --- stderr ---
21:08:37 DEBUG
21:08:42 INFO
21:08:42 INFO [loop_until]: kubectl --namespace=xlou top node
21:08:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:08:42 INFO [loop_until]: OK (rc = 0)
21:08:42 DEBUG --- stdout ---
21:08:42 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  75m  0%  1270Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  69m  0%  5571Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  66m  0%  5698Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  65m  0%  5737Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  67m  0%  6272Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  136m  0%  2141Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  70m  0%  6142Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  56m  0%  1067Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  51m  0%  1073Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  52m  0%  1126Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  60m  0%  14373Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  56m  0%  14093Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  59m  0%  14333Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  676m  4%  1692Mi  2%
21:08:42 DEBUG --- stderr ---
21:08:42 DEBUG
21:09:37 INFO
21:09:37 INFO [loop_until]: kubectl --namespace=xlou top pods
21:09:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:09:37 INFO [loop_until]: OK (rc = 0)
21:09:37 DEBUG --- stdout ---
21:09:37 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-4p7xb  1m  5Mi
am-55f77847b7-7qk7g  7m  4609Mi
am-55f77847b7-ngpns  7m  4683Mi
am-55f77847b7-q6zcv  6m  4621Mi
ds-cts-0  6m  416Mi
ds-cts-1  8m  384Mi
ds-cts-2  5m  376Mi
ds-idrepo-0  9m  13756Mi
ds-idrepo-1  10m  13742Mi
ds-idrepo-2  9m  13493Mi
end-user-ui-6845bc78c7-m5k2c  1m  5Mi
idm-65858d8c4c-5kwbg  7m  5212Mi
idm-65858d8c4c-8ff69  5m  5011Mi
lodemon-56989b88bb-nm2fw  2m  68Mi
login-ui-74d6fb46c-2qx2r  1m  4Mi
overseer-0-5fcfb8f45c-v6ck5  548m  201Mi
21:09:37 DEBUG --- stderr ---
21:09:37 DEBUG
21:09:42 INFO
21:09:42 INFO [loop_until]: kubectl --namespace=xlou top node
21:09:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
21:09:42 INFO [loop_until]: OK (rc = 0)
21:09:42 DEBUG --- stdout ---
21:09:42 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn  74m  0%  1268Mi  2%
gke-xlou-cdm-default-pool-f05840a3-5pbc  66m  0%  5572Mi  9%
gke-xlou-cdm-default-pool-f05840a3-976h  70m  0%  5698Mi  9%
gke-xlou-cdm-default-pool-f05840a3-9p4b  66m  0%  5734Mi  9%
gke-xlou-cdm-default-pool-f05840a3-bf2g  75m  0%  6272Mi  10%
gke-xlou-cdm-default-pool-f05840a3-h81k  134m  0%  2140Mi  3%
gke-xlou-cdm-default-pool-f05840a3-tnc9  70m  0%  6144Mi  10%
gke-xlou-cdm-ds-32e4dcb1-1l6p  53m  0%  1069Mi  1%
gke-xlou-cdm-ds-32e4dcb1-4z9d  57m  0%  1074Mi  1%
gke-xlou-cdm-ds-32e4dcb1-8bsn  54m  0%  1124Mi  1%
gke-xlou-cdm-ds-32e4dcb1-b374  64m  0%  14382Mi  24%
gke-xlou-cdm-ds-32e4dcb1-n920  61m  0%  14102Mi  24%
gke-xlou-cdm-ds-32e4dcb1-x4wx  59m  0%  14334Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  817m  5%  1988Mi  3%
21:09:42 DEBUG --- stderr ---
21:09:42 DEBUG
21:10:09 INFO Finished: True
21:10:09 INFO Waiting for threads to register finish flag
21:10:42 INFO Done. Have a nice day! :)
127.0.0.1 - - [11/Aug/2023 21:10:42] "GET /monitoring/stop HTTP/1.1" 200 -
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Cpu_cores_used_per_pod.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Memory_usage_per_pod.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Disk_tps_read_per_pod.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Disk_tps_writes_per_pod.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Cpu_cores_used_per_node.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Memory_usage_used_per_node.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Cpu_iowait_per_node.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Network_receive_per_node.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Network_transmit_per_node.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/am_cts_task_count_token_session.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/am_authentication_rate.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/am_authentication_count_per_pod.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/Cts_reaper_Deletion_count.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/AM_oauth2_authorization_codes.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_pods_replication_delay.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/am_cts_reaper_cache_size.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/node_disk_read_bytes_total.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/node_disk_written_bytes_total.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/ds_backend_entry_count.json does not exist. Skipping...
21:10:45 INFO File /tmp/lodemon_data-23-08-11_16:22:37/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [11/Aug/2023 21:10:47] "GET /monitoring/process HTTP/1.1" 200 -
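Every `[loop_until]` entry in this log follows the same pattern: run a shell command (here `kubectl top`), check its return code against `expected_rc`, and retry every `interval` seconds up to `max_time`. The lodemon source is not shown here, so the helper below is only a hypothetical sketch of that polling behaviour; the function name and signature are inferred from the log lines, not taken from the actual implementation.

```python
import subprocess
import time

def loop_until(command, max_time=180, interval=5, expected_rc=(0,)):
    """Hypothetical reconstruction of the [loop_until] helper seen in the log:
    re-run a shell command until its return code is in expected_rc, retrying
    every `interval` seconds, giving up after `max_time` seconds."""
    start = time.monotonic()
    while True:
        # shell=True because the logged commands include pipes, e.g.
        # kubectl get pods ... | grep "Running"
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        if result.returncode in expected_rc:
            print(f"[loop_until]: OK (rc = {result.returncode})")
            return result
        if time.monotonic() - start >= max_time:
            raise TimeoutError(f"[loop_until]: gave up after {max_time}s")
        time.sleep(interval)
```

With this sketch, the monitoring loop above would reduce to calling `loop_until("kubectl --namespace=xlou top pods")` once a minute and logging `result.stdout` under the `--- stdout ---` marker.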