--Task--
name: Monitoring_start
enabled: True
class_name: MonitoringStart
source_name: controller
source_namespace: >default<
target_name: controller
target_namespace: >default<
start: 0.0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: []
wait_for: []
preceding_task: None
options: {}
group_name: None
Current dir: /mnt/disk1/xslou/workshop/lodestar-fork/pyrock
task will be executed on controller (localhost)
2025-06-13 20:59:40 - INFO: Interval for this Task has changed to 2m (120 seconds)
2025-06-13 20:59:40 - INFO: interval was unset, so it was set to the Task default (based on the self.real_duration value) because the task is NOT allowed to stop by itself
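The defaulting rule described above, sketched as hypothetical Python (this is not pyrock's actual code): when no interval is configured and the task cannot stop by itself, fall back to a default derived from the task's duration.

DEFAULT_INTERVAL_S = 120  # the 2m default reported in the log

def resolve_interval(configured_interval, real_duration, can_stop_by_itself):
    # Hypothetical sketch only: an explicitly configured interval always wins.
    if configured_interval is not None:
        return configured_interval
    if can_stop_by_itself:
        # Run-to-completion tasks need no polling interval.
        return None
    # Looping tasks that cannot stop by themselves poll on the Task default,
    # never more often than the task's own measured duration.
    return max(DEFAULT_INTERVAL_S, real_duration or 0)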
________________________________________________________________________________
[2025-06-13 20:59:40] Monitoring_start step1 : N/A
________________________________________________________________________________
2025-06-13 21:00:40,719 INFO
2025-06-13 21:00:40,720 INFO **************************************** Start Lodemon ****************************************
2025-06-13 21:00:40,800 INFO
2025-06-13 21:00:40,800 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm delete configmap lodemon-config
2025-06-13 21:00:40,801 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
2025-06-13 21:00:41,210 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:00:41,210 DEBUG --- stdout ---
2025-06-13 21:00:41,210 DEBUG configmap "lodemon-config" deleted
2025-06-13 21:00:41,210 DEBUG --- stderr ---
2025-06-13 21:00:41,210 DEBUG
2025-06-13 21:00:41,210 INFO
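The [loop_until] entries throughout this log come from a retry wrapper that re-runs a shell command until its return code is one of the expected values or max_time elapses. A minimal Python sketch of that behaviour (an assumption about the helper's semantics, not pyrock's actual implementation):

import subprocess, time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run `cmd` every `interval` seconds until its return code is in
    `expected_rc`, giving up after `max_time` seconds."""
    deadline = time.time() + max_time
    while True:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.time() >= deadline:
            raise TimeoutError(f"rc {result.returncode} not in {list(expected_rc)} "
                               f"after {max_time}s")
        time.sleep(interval)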
2025-06-13 21:00:41,210 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm create configmap lodemon-config --from-file=/mnt/disk1/xslou/workshop/lodestar-fork/config/config.yaml
2025-06-13 21:00:41,210 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
2025-06-13 21:00:41,512 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:00:41,513 DEBUG --- stdout ---
2025-06-13 21:00:41,513 DEBUG configmap/lodemon-config created
2025-06-13 21:00:41,513 DEBUG --- stderr ---
2025-06-13 21:00:41,513 DEBUG
2025-06-13 21:00:41,513 INFO
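Both the lodemon-config and lodemon-deployments ConfigMaps are refreshed with the same delete-then-create pattern; a minimal sketch using subprocess, with the namespace, context, and file path taken from the log:

import subprocess

KUBECTL = ["kubectl", "--namespace=xlou",
           "--context=gke_engineeringpit_us-east1-d_xlou-cdm"]

def recreate_configmap(name, source_file):
    # Delete first so the step is idempotent; a return code of 1 ("not found")
    # is tolerated, which is why the log accepts expected_rc=[0, 1] for the delete.
    subprocess.run(KUBECTL + ["delete", "configmap", name], check=False)
    subprocess.run(KUBECTL + ["create", "configmap", name,
                              f"--from-file={source_file}"], check=True)

recreate_configmap("lodemon-config",
                   "/mnt/disk1/xslou/workshop/lodestar-fork/config/config.yaml")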
2025-06-13 21:00:41,514 INFO [run_command]: gcloud iam service-accounts add-iam-policy-binding pit-test@engineeringpit.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:engineeringpit.svc.id.goog[xlou/k8s-svc-acct-lodemon]"
2025-06-13 21:00:43,203 ERROR [run_command]: ERROR
2025-06-13 21:00:43,203 ERROR --- rc ---
2025-06-13 21:00:43,203 ERROR returned 1, expected to be in [0]
2025-06-13 21:00:43,203 ERROR --- stdout ---
2025-06-13 21:00:43,203 ERROR
2025-06-13 21:00:43,203 ERROR --- stderr ---
2025-06-13 21:00:43,203 ERROR ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) PERMISSION_DENIED: Permission 'iam.serviceAccounts.setIamPolicy' denied on resource (or it may not exist). This command is authenticated as xiaosong.lou@pingidentity.com which is the active account specified by the [core/account] property.
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: iam.googleapis.com
  metadata:
    permission: iam.serviceAccounts.setIamPolicy
  reason: IAM_PERMISSION_DENIED
2025-06-13 21:00:43,203 INFO
2025-06-13 21:00:43,203 WARNING could not add the iam-policy binding to the GSA: pit-test@engineeringpit.iam.gserviceaccount.com for the member k8s-svc-acct-lodemon service account; Identity Cloud metrics will not be present in the Lodemon report. The exception thrown was: Command error: rc 1 -- stdout: -- stderr: ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) PERMISSION_DENIED: Permission 'iam.serviceAccounts.setIamPolicy' denied on resource (or it may not exist). This command is authenticated as xiaosong.lou@pingidentity.com which is the active account specified by the [core/account] property.
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: iam.googleapis.com
  metadata:
    permission: iam.serviceAccounts.setIamPolicy
  reason: IAM_PERMISSION_DENIED
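The binding fails because the active gcloud account lacks iam.serviceAccounts.setIamPolicy on the target service account. An account that does hold that permission (for example via roles/iam.serviceAccountAdmin on the GSA) can apply the same Workload Identity binding; the sketch below simply wraps the command the task attempted:

import subprocess

GSA = "pit-test@engineeringpit.iam.gserviceaccount.com"
MEMBER = "serviceAccount:engineeringpit.svc.id.goog[xlou/k8s-svc-acct-lodemon]"

# Grants the Kubernetes service account k8s-svc-acct-lodemon permission to
# impersonate the GSA (Workload Identity). Must be run by an account that holds
# iam.serviceAccounts.setIamPolicy on the GSA.
subprocess.run([
    "gcloud", "iam", "service-accounts", "add-iam-policy-binding", GSA,
    "--role", "roles/iam.workloadIdentityUser",
    "--member", MEMBER,
], check=True)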
2025-06-13 21:00:43,206 INFO
2025-06-13 21:00:43,206 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm delete configmap lodemon-deployments
2025-06-13 21:00:43,206 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
2025-06-13 21:00:43,608 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:00:43,608 DEBUG --- stdout ---
2025-06-13 21:00:43,608 DEBUG configmap "lodemon-deployments" deleted
2025-06-13 21:00:43,608 DEBUG --- stderr ---
2025-06-13 21:00:43,608 DEBUG
2025-06-13 21:00:43,609 INFO
2025-06-13 21:00:43,609 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm create configmap lodemon-deployments --from-file=/mnt/disk1/xslou/workshop/lodestar-fork/config/deployments_to_monitor.yaml
2025-06-13 21:00:43,609 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
2025-06-13 21:00:43,878 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:00:43,878 DEBUG --- stdout ---
2025-06-13 21:00:43,878 DEBUG configmap/lodemon-deployments created
2025-06-13 21:00:43,878 DEBUG --- stderr ---
2025-06-13 21:00:43,878 DEBUG
2025-06-13 21:00:43,878 INFO
2025-06-13 21:00:43,878 INFO ------------- Deploy the lodestarbox with lodemon profile -------------
2025-06-13 21:00:43,878 INFO
2025-06-13 21:00:43,878 INFO [run_command]: skaffold deploy --profile lodemon --config=/tmp/tmp08xj_a2j --status-check=true --namespace=xlou
2025-06-13 21:00:46,605 INFO Starting deploy...
2025-06-13 21:00:47,900 INFO - serviceaccount/k8s-svc-acct-lodemon unchanged
2025-06-13 21:00:47,991 INFO - clusterrolebinding.rbac.authorization.k8s.io/k8s-svc-acct-crb-xlou unchanged
2025-06-13 21:00:48,224 INFO - deployment.apps/lodemon configured
2025-06-13 21:00:48,234 INFO Waiting for deployments to stabilize...
2025-06-13 21:00:53,530 INFO - xlou:deployment/lodemon: creating container lodemon
2025-06-13 21:00:53,540 INFO - xlou:pod/lodemon-5b8fd67bb-krgn9: creating container lodemon
2025-06-13 21:01:13,528 INFO - xlou:deployment/lodemon: waiting for rollout to finish: 1 old replicas are pending termination...
2025-06-13 21:01:39,957 INFO - xlou:deployment/lodemon is ready.
2025-06-13 21:01:39,968 INFO Deployments stabilized in 51.725 seconds
2025-06-13 21:01:40,060 ERROR There is a new version (2.16.0) of Skaffold available. Download it from:
2025-06-13 21:01:40,060 ERROR https://github.com/GoogleContainerTools/skaffold/releases/tag/v2.16.0
2025-06-13 21:01:40,060 ERROR
2025-06-13 21:01:40,060 ERROR Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
2025-06-13 21:01:40,060 ERROR To help improve the quality of this product, we collect anonymized usage data; for details on what is tracked and how we use this data, visit . This data is handled in accordance with our privacy policy.
2025-06-13 21:01:40,060 ERROR
2025-06-13 21:01:40,060 ERROR You may choose to opt out of this collection by running the following command:
2025-06-13 21:01:40,060 ERROR skaffold config set --global collect-metrics false
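The stderr lines above are Skaffold's version-update and telemetry notice, not a deployment failure (the rollout stabilized a few lines earlier). If the notice is unwanted, the opt-out command it mentions can be run once, for example:

import subprocess

# Command taken verbatim from the notice above; disables Skaffold's anonymized
# usage collection globally so the prompt no longer appears after deploys.
subprocess.run(["skaffold", "config", "set", "--global", "collect-metrics", "false"],
               check=True)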
2025-06-13 21:01:40,060 INFO
2025-06-13 21:01:40,060 INFO --------------------- Get expected number of pods ---------------------
2025-06-13 21:01:40,060 INFO
2025-06-13 21:01:40,060 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get deployments --selector app=lodemon --output jsonpath={.items[*].spec.replicas}
2025-06-13 21:01:40,060 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
2025-06-13 21:01:40,352 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:01:40,352 DEBUG --- stdout ---
2025-06-13 21:01:40,352 DEBUG 1
2025-06-13 21:01:40,352 DEBUG --- stderr ---
2025-06-13 21:01:40,352 DEBUG
2025-06-13 21:01:40,352 INFO
2025-06-13 21:01:40,352 INFO -------------- Reloading pod list for product "lodemon" --------------
2025-06-13 21:01:40,353 INFO
2025-06-13 21:01:40,353 INFO [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get pods --selector app=lodemon --output jsonpath={.items[*].metadata.name}` | grep 1
2025-06-13 21:01:40,353 INFO [loop_until]: (max_time=360.0, interval=10, expected_rc=[0])
2025-06-13 21:01:40,729 INFO [loop_until]: Function succeeded after 0s (rc=0) - failed to find expected number of elements: 1 - retry
2025-06-13 21:01:51,083 INFO [loop_until]: Function succeeded after 10s (rc=0) - failed to find expected number of elements: 1 - retry
2025-06-13 21:02:01,426 INFO [loop_until]: Function succeeded after 21s (rc=0) - failed to find expected number of elements: 1 - retry
2025-06-13 21:02:11,708 INFO [loop_until]: Function succeeded after 31s (rc=0) - expected number of elements found
2025-06-13 21:02:11,708 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:11,708 DEBUG --- stdout ---
2025-06-13 21:02:11,708 DEBUG lodemon-5b8fd67bb-krgn9
2025-06-13 21:02:11,708 DEBUG --- stderr ---
2025-06-13 21:02:11,708 DEBUG
2025-06-13 21:02:11,708 INFO
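The two pod-count steps above read the expected replica count from the deployment spec and then poll the pod list until that many app=lodemon pods exist. A rough Python equivalent of the awk/grep pipeline, using the namespace, context, and timing values from the log:

import subprocess, time

KUBECTL = ["kubectl", "--namespace=xlou",
           "--context=gke_engineeringpit_us-east1-d_xlou-cdm"]

def jsonpath(kind, path, extra=()):
    # Run `kubectl get` with a jsonpath expression and return the
    # whitespace-separated fields it prints.
    out = subprocess.run(KUBECTL + ["get", kind, *extra,
                                    "--output", f"jsonpath={path}"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

expected = sum(int(n) for n in jsonpath(
    "deployments", "{.items[*].spec.replicas}", ["--selector", "app=lodemon"]))

deadline = time.time() + 360                     # max_time from the log
while len(jsonpath("pods", "{.items[*].metadata.name}",
                   ["--selector", "app=lodemon"])) != expected:
    if time.time() > deadline:
        raise TimeoutError("expected number of lodemon pods never appeared")
    time.sleep(10)                               # interval from the log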
2025-06-13 21:02:11,708 INFO ------------ Check pod lodemon-5b8fd67bb-krgn9 is running ------------
2025-06-13 21:02:11,708 INFO
2025-06-13 21:02:11,708 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get pods lodemon-5b8fd67bb-krgn9 -o=jsonpath={.status.phase} | grep "Running"
2025-06-13 21:02:11,708 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
2025-06-13 21:02:12,043 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
2025-06-13 21:02:12,043 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:12,043 DEBUG --- stdout ---
2025-06-13 21:02:12,043 DEBUG Running
2025-06-13 21:02:12,043 DEBUG --- stderr ---
2025-06-13 21:02:12,043 DEBUG
2025-06-13 21:02:12,043 INFO
2025-06-13 21:02:12,043 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get pods lodemon-5b8fd67bb-krgn9 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
2025-06-13 21:02:12,043 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
2025-06-13 21:02:12,380 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
2025-06-13 21:02:12,380 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:12,380 DEBUG --- stdout ---
2025-06-13 21:02:12,380 DEBUG true
2025-06-13 21:02:12,380 DEBUG --- stderr ---
2025-06-13 21:02:12,380 DEBUG
2025-06-13 21:02:12,380 INFO
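The two probes above require the pod phase to be Running and every container status to report ready=true; a compact sketch of the same checks:

import subprocess

POD = "lodemon-5b8fd67bb-krgn9"
BASE = ["kubectl", "--namespace=xlou",
        "--context=gke_engineeringpit_us-east1-d_xlou-cdm", "get", "pod", POD]

# Pod phase must be "Running"...
phase = subprocess.run(BASE + ["-o", "jsonpath={.status.phase}"],
                       capture_output=True, text=True, check=True).stdout.strip()
# ...and every container in the pod must report ready=true.
ready = subprocess.run(BASE + ["-o", "jsonpath={.status.containerStatuses[*].ready}"],
                       capture_output=True, text=True, check=True).stdout.split()
assert phase == "Running" and all(flag == "true" for flag in ready)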
2025-06-13 21:02:12,381 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get pod lodemon-5b8fd67bb-krgn9 --output jsonpath={.status.startTime}
2025-06-13 21:02:12,381 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
2025-06-13 21:02:12,656 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:12,656 DEBUG --- stdout ---
2025-06-13 21:02:12,656 DEBUG 2025-06-13T21:00:48Z
2025-06-13 21:02:12,656 DEBUG --- stderr ---
2025-06-13 21:02:12,656 DEBUG
2025-06-13 21:02:12,656 INFO
2025-06-13 21:02:12,656 INFO ----- Check pod lodemon-5b8fd67bb-krgn9 filesystem is accessible -----
2025-06-13 21:02:13,427 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
2025-06-13 21:02:13,427 INFO
2025-06-13 21:02:13,428 INFO ----------- Check pod lodemon-5b8fd67bb-krgn9 restart count -----------
2025-06-13 21:02:13,428 INFO
2025-06-13 21:02:13,428 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get pod lodemon-5b8fd67bb-krgn9 --output jsonpath={.status.containerStatuses[*].restartCount}
2025-06-13 21:02:13,428 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
2025-06-13 21:02:13,706 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:13,706 DEBUG --- stdout ---
2025-06-13 21:02:13,706 DEBUG 0
2025-06-13 21:02:13,706 DEBUG --- stderr ---
2025-06-13 21:02:13,706 DEBUG
2025-06-13 21:02:13,706 INFO Pod lodemon-5b8fd67bb-krgn9 has been restarted 0 times.
2025-06-13 21:02:13,706 INFO
2025-06-13 21:02:13,706 INFO --------------------- Get expected number of pods ---------------------
2025-06-13 21:02:13,706 INFO
2025-06-13 21:02:13,706 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get deployments --selector app=lodemon --output jsonpath={.items[*].spec.replicas}
2025-06-13 21:02:13,706 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
2025-06-13 21:02:13,986 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:13,986 DEBUG --- stdout ---
2025-06-13 21:02:13,986 DEBUG 1
2025-06-13 21:02:13,986 DEBUG --- stderr ---
2025-06-13 21:02:13,986 DEBUG
2025-06-13 21:02:13,987 INFO
2025-06-13 21:02:13,987 INFO -------------- Waiting for 1 expected pod(s) to be ready --------------
2025-06-13 21:02:13,987 INFO
2025-06-13 21:02:13,987 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm get deployments lodemon --output jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
2025-06-13 21:02:13,987 INFO [loop_until]: (max_time=900, interval=30, expected_rc=[0])
2025-06-13 21:02:14,264 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
2025-06-13 21:02:14,264 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:14,264 DEBUG --- stdout ---
2025-06-13 21:02:14,264 DEBUG ready:1 replicas:1
2025-06-13 21:02:14,264 DEBUG --- stderr ---
2025-06-13 21:02:14,264 DEBUG
2025-06-13 21:02:14,264 INFO Component lodemon is alive
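The final gate compares readyReplicas to replicas on the lodemon deployment; a sketch of that check:

import subprocess

out = subprocess.run(
    ["kubectl", "--namespace=xlou",
     "--context=gke_engineeringpit_us-east1-d_xlou-cdm",
     "get", "deployments", "lodemon", "--output",
     "jsonpath=ready:{.status.readyReplicas} replicas:{.status.replicas}"],
    capture_output=True, text=True, check=True)
# The task treats lodemon as alive once every desired replica is ready,
# i.e. the output reads "ready:1 replicas:1" for this single-replica deployment.
assert out.stdout.strip() == "ready:1 replicas:1"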
2025-06-13 21:02:15,082 INFO Dumping pod description and logs to /mnt/disk1/xslou/workshop/lodestar-fork/results/pyrock/pod-logs/20250613_210214-after-lodemon-deploy/lodemon-5b8fd67bb-krgn9.txt
2025-06-13 21:02:15,082 INFO Check pod logs for errors
2025-06-13 21:02:15,992 INFO Dumping pod description and logs to /mnt/disk1/xslou/workshop/lodestar-fork/results/pyrock/pod-logs/20250613_210214-after-lodemon-deploy/_cpu-info.txt
2025-06-13 21:02:15,993 INFO
2025-06-13 21:02:15,995 INFO [loop_until]: kubectl --namespace=xlou --context=gke_engineeringpit_us-east1-d_xlou-cdm exec lodemon-5b8fd67bb-krgn9 -- curl --fail --silent --show-error http://localhost:8080/monitoring/start
2025-06-13 21:02:15,995 INFO [loop_until]: (max_time=600, interval=5, expected_rc=[0])
2025-06-13 21:02:16,790 INFO [loop_until]: OK (rc = 0)
2025-06-13 21:02:16,790 DEBUG --- stdout ---
2025-06-13 21:02:16,790 DEBUG {"Status": "OK", "Message": "Monitoring has been started"}
2025-06-13 21:02:16,790 DEBUG --- stderr ---
2025-06-13 21:02:16,790 DEBUG
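Monitoring itself is started by exec'ing curl inside the lodemon pod against its local HTTP API; a sketch of the call and the expected JSON reply:

import json, subprocess

out = subprocess.run(
    ["kubectl", "--namespace=xlou",
     "--context=gke_engineeringpit_us-east1-d_xlou-cdm",
     "exec", "lodemon-5b8fd67bb-krgn9", "--",
     "curl", "--fail", "--silent", "--show-error",
     "http://localhost:8080/monitoring/start"],
    capture_output=True, text=True, check=True)
reply = json.loads(out.stdout)
# Expected reply, per the log: {"Status": "OK", "Message": "Monitoring has been started"}
assert reply["Status"] == "OK"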
2025-06-13 21:02:17,590 INFO Dumping pod description and logs to /mnt/disk1/xslou/workshop/lodestar-fork/results/pyrock/idc_benchmark/pod-logs/stack/after-lodemon-deployment/lodemon-5b8fd67bb-krgn9.txt
2025-06-13 21:02:17,590 INFO Check pod logs for errors
2025-06-13 21:02:18,408 INFO Dumping pod description and logs to /mnt/disk1/xslou/workshop/lodestar-fork/results/pyrock/idc_benchmark/pod-logs/stack/after-lodemon-deployment/_cpu-info.txt
________________________________________________________________________________
[2025-06-13 21:02:18] Monitoring_start post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped