Test Report

Test Suite: openshift-tests

Duration: 4337.0 sec
Test Cases: 297
Failures: 16

Test Results


Test Class: e2e_tests
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 223.0s

[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 1.3s

[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 54.5s

[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 55.6s

[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 41.9s

[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 57.2s

[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 2.5s

[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 61.0s

[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 64.0s

[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 60.0s

[sig-network] DNS should provide DNS for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 43.6s

[sig-apps] Job should delete a job [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 75.0s

[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 236.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:233]: Sep  9 04:34:50.269: Affinity should hold but didn't.
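For reference, the behaviour this test asserts can be reproduced by hand against the same Service while the fixture pods are still running. This is a rough sketch, not part of the test output: the namespace, Service, exec pod, and ClusterIP are the ones that appear in the stdout below, and the patch step simply sets the standard spec.sessionAffinity field the test toggles.

    # enable ClientIP session affinity on the Service under test
    kubectl --namespace=e2e-services-458 patch service affinity-clusterip-transition \
      -p '{"spec":{"sessionAffinity":"ClientIP"}}'

    # repeat the same request loop the test runs; with affinity in effect,
    # every line of output should name the same backend pod
    kubectl --namespace=e2e-services-458 exec execpod-affinityfngdd -- \
      /bin/sh -c 'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://172.30.38.145:80/; echo; done'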

Stdout
I0909 04:31:06.863188  870321 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:31:06.926: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:31:06.956: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:31:07.035: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:31:07.035: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:31:07.035: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:31:07.055: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:31:07.060: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:31:07.077: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Sep  9 04:31:07.267: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:31:07.648: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:731
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace e2e-services-458
STEP: creating service affinity-clusterip-transition in namespace e2e-services-458
STEP: creating replication controller affinity-clusterip-transition in namespace e2e-services-458
I0909 04:31:07.740514  870321 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: e2e-services-458, replica count: 3
I0909 04:31:10.790866  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:13.791087  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:16.791279  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:19.791524  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:22.791759  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:25.791994  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:28.792456  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:31.792671  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:34.792950  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:37.793357  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:40.793594  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:43.793815  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:46.794025  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:49.794262  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:52.794798  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:55.794997  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:31:58.795244  870321 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep  9 04:31:58.854: INFO: Creating new exec pod
Sep  9 04:32:12.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Sep  9 04:32:14.572: INFO: rc: 1
Sep  9 04:32:14.572: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:32:15.572: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Sep  9 04:32:16.537: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Sep  9 04:32:16.537: INFO: stdout: ""
Sep  9 04:32:16.537: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c nc -zv -t -w 2 172.30.38.145 80'
Sep  9 04:32:17.011: INFO: stderr: "+ nc -zv -t -w 2 172.30.38.145 80\nConnection to 172.30.38.145 80 port [tcp/http] succeeded!\n"
Sep  9 04:32:17.011: INFO: stdout: ""
Sep  9 04:32:17.170: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:32:17.833: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:32:17.834: INFO: stdout: "\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx"
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:17.834: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:47.834: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:32:48.481: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:32:48.482: INFO: stdout: "\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx"
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:48.482: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:48.555: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:32:49.128: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:32:49.128: INFO: stdout: "\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l"
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:32:49.128: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:19.128: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:33:19.691: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:33:19.691: INFO: stdout: "\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh"
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:19.691: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:49.128: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:33:49.719: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:33:49.719: INFO: stdout: "\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l"
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:33:49.719: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:19.128: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:34:19.818: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:34:19.818: INFO: stdout: "\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh"
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:19.818: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.129: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:34:49.768: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:34:49.769: INFO: stdout: "\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l"
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:49.769: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:49.769: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-458 execpod-affinityfngdd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.30.38.145:80/ ; done'
Sep  9 04:34:50.268: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.30.38.145:80/\n"
Sep  9 04:34:50.269: INFO: stdout: "\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-ls2rh\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l\naffinity-clusterip-transition-7rkcx\naffinity-clusterip-transition-f8v2l"
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-ls2rh
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-7rkcx
Sep  9 04:34:50.269: INFO: Received response from host: affinity-clusterip-transition-f8v2l
Sep  9 04:34:50.269: INFO: [affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-f8v2l affinity-clusterip-transition-ls2rh affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-ls2rh affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l affinity-clusterip-transition-7rkcx affinity-clusterip-transition-f8v2l]
Sep  9 04:34:50.269: FAIL: Affinity should hold but didn't.

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.checkAffinityFailed(0xc00163e800, 0x60, 0x80, 0x5cf1dad, 0x20)
	@/k8s.io/kubernetes/test/e2e/network/service.go:233 +0xde
k8s.io/kubernetes/test/e2e/network.checkAffinity(0x6b71de0, 0xc000d9be40, 0xc001fa1000, 0xc0016616e0, 0xd, 0x50, 0x1, 0xc001fa1001)
	@/k8s.io/kubernetes/test/e2e/network/service.go:192 +0x212
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000b85e40, 0x6b71de0, 0xc000d9be40, 0xc0008c26c0, 0x431201)
	@/k8s.io/kubernetes/test/e2e/network/service.go:3395 +0x834
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	@/k8s.io/kubernetes/test/e2e/network/service.go:3335
k8s.io/kubernetes/test/e2e/network.glob..func25.27()
	@/k8s.io/kubernetes/test/e2e/network/service.go:2442 +0xa1
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc0018f2e10, 0xc0018b3490, 0x1, 0x1, 0x0, 0x22442a0)
	github.com/openshift/origin@/pkg/test/ginkgo/cmd_runtest.go:61 +0x41f
main.newRunTestCommand.func1.1()
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:239 +0x4e
github.com/openshift/origin/test/extended/util.WithCleanup(0xc001a33bd8)
	github.com/openshift/origin@/test/extended/util/test.go:167 +0x58
main.newRunTestCommand.func1(0xc000a4d680, 0xc0018b3490, 0x1, 0x1, 0x0, 0x0)
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:239 +0x1be
github.com/spf13/cobra.(*Command).execute(0xc000a4d680, 0xc0018b3450, 0x1, 0x1, 0xc000a4d680, 0xc0018b3450)
	@/github.com/spf13/cobra/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc000a4ca00, 0x0, 0x696bee0, 0x9eaaea8)
	@/github.com/spf13/cobra/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	@/github.com/spf13/cobra/command.go:864
main.main.func1(0xc000a4ca00, 0x0, 0x0)
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:61 +0x9c
main.main()
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:62 +0x36e
Sep  9 04:34:50.270: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace e2e-services-458, will wait for the garbage collector to delete the pods
Sep  9 04:34:50.410: INFO: Deleting ReplicationController affinity-clusterip-transition took: 23.550928ms
Sep  9 04:34:50.511: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.429828ms
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-services-458".
STEP: Found 27 events.
Sep  9 04:35:02.484: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-transition-7rkcx: { } Scheduled: Successfully assigned e2e-services-458/affinity-clusterip-transition-7rkcx to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:35:02.484: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-transition-f8v2l: { } Scheduled: Successfully assigned e2e-services-458/affinity-clusterip-transition-f8v2l to ostest-5xqm8-worker-0-rzx47
Sep  9 04:35:02.484: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-transition-ls2rh: { } Scheduled: Successfully assigned e2e-services-458/affinity-clusterip-transition-ls2rh to ostest-5xqm8-worker-0-twrlr
Sep  9 04:35:02.484: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityfngdd: { } Scheduled: Successfully assigned e2e-services-458/execpod-affinityfngdd to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:07 -0400 EDT - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-7rkcx
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:07 -0400 EDT - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-ls2rh
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:07 -0400 EDT - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-f8v2l
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:37 -0400 EDT - event for affinity-clusterip-transition-ls2rh: {multus } AddedInterface: Add eth0 [10.128.202.29/23]
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:38 -0400 EDT - event for affinity-clusterip-transition-ls2rh: {kubelet ostest-5xqm8-worker-0-twrlr} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:39 -0400 EDT - event for affinity-clusterip-transition-ls2rh: {kubelet ostest-5xqm8-worker-0-twrlr} Started: Started container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:39 -0400 EDT - event for affinity-clusterip-transition-ls2rh: {kubelet ostest-5xqm8-worker-0-twrlr} Created: Created container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:49 -0400 EDT - event for affinity-clusterip-transition-7rkcx: {multus } AddedInterface: Add eth0 [10.128.202.240/23]
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:49 -0400 EDT - event for affinity-clusterip-transition-7rkcx: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:50 -0400 EDT - event for affinity-clusterip-transition-7rkcx: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:50 -0400 EDT - event for affinity-clusterip-transition-7rkcx: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:55 -0400 EDT - event for affinity-clusterip-transition-f8v2l: {multus } AddedInterface: Add eth0 [10.128.203.149/23]
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:56 -0400 EDT - event for affinity-clusterip-transition-f8v2l: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:56 -0400 EDT - event for affinity-clusterip-transition-f8v2l: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:31:56 -0400 EDT - event for affinity-clusterip-transition-f8v2l: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:32:08 -0400 EDT - event for execpod-affinityfngdd: {multus } AddedInterface: Add eth0 [10.128.202.165/23]
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:32:08 -0400 EDT - event for execpod-affinityfngdd: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:32:09 -0400 EDT - event for execpod-affinityfngdd: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container agnhost-pause
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:32:09 -0400 EDT - event for execpod-affinityfngdd: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container agnhost-pause
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:34:50 -0400 EDT - event for affinity-clusterip-transition-7rkcx: {kubelet ostest-5xqm8-worker-0-cbbx9} Killing: Stopping container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:34:50 -0400 EDT - event for affinity-clusterip-transition-f8v2l: {kubelet ostest-5xqm8-worker-0-rzx47} Killing: Stopping container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:34:50 -0400 EDT - event for affinity-clusterip-transition-ls2rh: {kubelet ostest-5xqm8-worker-0-twrlr} Killing: Stopping container affinity-clusterip-transition
Sep  9 04:35:02.484: INFO: At 2020-09-09 04:34:50 -0400 EDT - event for execpod-affinityfngdd: {kubelet ostest-5xqm8-worker-0-cbbx9} Killing: Stopping container agnhost-pause
Sep  9 04:35:02.495: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:35:02.495: INFO: 
Sep  9 04:35:02.513: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:35:02.513: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-services-458" for this suite.
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:735
Sep  9 04:35:02.558: INFO: Running AfterSuite actions on all nodes
Sep  9 04:35:02.558: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:233]: Sep  9 04:34:50.269: Affinity should hold but didn't.

Stderr
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 110.0s

[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
e2e_tests
Time Taken: 1.5s

[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 107.0s

[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 98.0s

[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 7.0s

[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 149.0s

[sig-apps] Deployment deployment should delete old replica sets [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 35.0s

[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 103.0s

[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 157.0s

[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 35.9s

[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
e2e_tests
Time Taken: 27.3s

[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 138.0s

[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 59.5s

[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 2.4s

[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
e2e_tests
Time Taken: 55.5s

[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 83.0s

[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 378.0s

[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 71.0s

[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 73.0s

[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 44.9s

[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 46.4s

[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 58.6s

[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] [Suite:k8s]
e2e_tests
Time Taken: 416.0s

[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 41.8s

[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
e2e_tests
Time Taken: 259.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:3313]: Unexpected error:
    <*errors.errorString | 0xc001da65f0>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
occurred
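For context, this test exercises a ClusterIP Service with a client-IP session affinity timeout, and the error above means the endpoint never answered within the 2-minute reachability window. A minimal Service of that shape looks roughly like the following sketch; the Service name comes from the error message and the namespace from the stdout below, while the selector, ports, and timeout value are illustrative assumptions rather than values read from the cluster:

    # create a ClusterIP Service with a client-IP affinity timeout (illustrative only)
    kubectl --namespace=e2e-services-252 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-clusterip-timeout
    spec:
      selector:
        app: affinity-clusterip-timeout   # assumed label on the backend pods
      ports:
      - port: 80
        targetPort: 9376
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10              # affinity entries expire after this many seconds
    EOF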

Stdout
I0909 04:26:48.280101  851029 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:26:48.330: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:26:48.364: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:26:48.434: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:26:48.434: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:26:48.434: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:26:48.449: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:26:48.455: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:26:48.479: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Sep  9 04:26:48.811: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:26:49.090: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:731
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace e2e-services-252
Sep  9 04:26:51.193: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Sep  9 04:26:51.609: INFO: rc: 7
Sep  9 04:26:51.662: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:26:51.680: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:26:53.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:26:53.692: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:26:55.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:26:55.696: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:26:57.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:26:57.693: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:26:59.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:26:59.708: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:01.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:01.701: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:03.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:03.690: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:05.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:05.699: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:07.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:07.696: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:09.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:09.699: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:11.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:11.691: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:13.680: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:13.690: INFO: Pod kube-proxy-mode-detector still exists
Sep  9 04:27:15.682: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  9 04:27:15.698: INFO: Pod kube-proxy-mode-detector no longer exists
Sep  9 04:27:15.699: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating service affinity-clusterip-timeout in namespace e2e-services-252
STEP: creating replication controller affinity-clusterip-timeout in namespace e2e-services-252
I0909 04:27:15.762945  851029 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: e2e-services-252, replica count: 3
I0909 04:27:18.813627  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:21.813846  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:24.814128  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:27.814375  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:30.814619  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:33.814908  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:36.815171  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:39.815406  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:42.815657  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:27:45.815913  851029 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep  9 04:27:45.847: INFO: Creating new exec pod
Sep  9 04:27:58.927: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:01.498: INFO: rc: 1
Sep  9 04:28:01.498: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:02.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:04.951: INFO: rc: 1
Sep  9 04:28:04.951: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:05.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:08.008: INFO: rc: 1
Sep  9 04:28:08.008: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:08.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:10.960: INFO: rc: 1
Sep  9 04:28:10.960: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:11.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:13.962: INFO: rc: 1
Sep  9 04:28:13.962: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:14.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:16.938: INFO: rc: 1
Sep  9 04:28:16.938: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:17.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:20.024: INFO: rc: 1
Sep  9 04:28:20.024: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:20.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:23.009: INFO: rc: 1
Sep  9 04:28:23.009: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:23.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:25.954: INFO: rc: 1
Sep  9 04:28:25.954: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:26.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:28.987: INFO: rc: 1
Sep  9 04:28:28.987: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:29.500: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:31.994: INFO: rc: 1
Sep  9 04:28:31.994: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:32.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:34.961: INFO: rc: 1
Sep  9 04:28:34.961: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:35.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:38.003: INFO: rc: 1
Sep  9 04:28:38.003: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:38.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:41.000: INFO: rc: 1
Sep  9 04:28:41.000: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:41.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:43.970: INFO: rc: 1
Sep  9 04:28:43.970: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:44.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:46.976: INFO: rc: 1
Sep  9 04:28:46.976: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:47.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:49.969: INFO: rc: 1
Sep  9 04:28:49.969: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:50.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:53.050: INFO: rc: 1
Sep  9 04:28:53.050: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:53.501: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:28:56.809: INFO: rc: 1
Sep  9 04:28:56.810: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:28:57.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:26.419: INFO: rc: 1
Sep  9 04:29:26.419: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:26.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:29.033: INFO: rc: 1
Sep  9 04:29:29.033: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:29.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:32.017: INFO: rc: 1
Sep  9 04:29:32.017: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:32.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:35.103: INFO: rc: 1
Sep  9 04:29:35.103: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:35.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:38.550: INFO: rc: 1
Sep  9 04:29:38.550: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:39.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:41.965: INFO: rc: 1
Sep  9 04:29:41.965: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:42.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:44.962: INFO: rc: 1
Sep  9 04:29:44.962: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:45.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:47.944: INFO: rc: 1
Sep  9 04:29:47.944: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:48.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:51.158: INFO: rc: 1
Sep  9 04:29:51.158: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:51.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:54.140: INFO: rc: 1
Sep  9 04:29:54.140: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:54.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:56.948: INFO: rc: 1
Sep  9 04:29:56.948: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:29:57.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:29:59.974: INFO: rc: 1
Sep  9 04:29:59.974: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:30:00.499: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:30:03.040: INFO: rc: 1
Sep  9 04:30:03.040: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:30:03.040: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep  9 04:30:05.610: INFO: rc: 1
Sep  9 04:30:05.610: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-252 execpod-affinityxwmtc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:30:05.611: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace e2e-services-252, will wait for the garbage collector to delete the pods
Sep  9 04:30:05.820: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 43.363062ms
Sep  9 04:30:06.220: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.70825ms
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-services-252".
STEP: Found 34 events.
Sep  9 04:31:06.448: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-timeout-7bs6z: { } Scheduled: Successfully assigned e2e-services-252/affinity-clusterip-timeout-7bs6z to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:31:06.448: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-timeout-j28fn: { } Scheduled: Successfully assigned e2e-services-252/affinity-clusterip-timeout-j28fn to ostest-5xqm8-worker-0-twrlr
Sep  9 04:31:06.449: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-timeout-w9fc2: { } Scheduled: Successfully assigned e2e-services-252/affinity-clusterip-timeout-w9fc2 to ostest-5xqm8-worker-0-rzx47
Sep  9 04:31:06.449: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityxwmtc: { } Scheduled: Successfully assigned e2e-services-252/execpod-affinityxwmtc to ostest-5xqm8-worker-0-rzx47
Sep  9 04:31:06.449: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned e2e-services-252/kube-proxy-mode-detector to ostest-5xqm8-worker-0-rzx47
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:26:50 -0400 EDT - event for kube-proxy-mode-detector: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container detector
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:26:50 -0400 EDT - event for kube-proxy-mode-detector: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:26:50 -0400 EDT - event for kube-proxy-mode-detector: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container detector
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:26:52 -0400 EDT - event for kube-proxy-mode-detector: {kubelet ostest-5xqm8-worker-0-rzx47} Killing: Stopping container detector
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:16 -0400 EDT - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-j28fn
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:16 -0400 EDT - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-7bs6z
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:16 -0400 EDT - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-w9fc2
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:17 -0400 EDT - event for affinity-clusterip-timeout-w9fc2: {kubelet ostest-5xqm8-worker-0-rzx47} FailedMount: MountVolume.SetUp failed for volume "default-token-glj5z" : failed to sync secret cache: timed out waiting for the condition
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:23 -0400 EDT - event for affinity-clusterip-timeout-w9fc2: {multus } AddedInterface: Add eth0 [10.128.149.212/23]
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:24 -0400 EDT - event for affinity-clusterip-timeout-w9fc2: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:24 -0400 EDT - event for affinity-clusterip-timeout-w9fc2: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:24 -0400 EDT - event for affinity-clusterip-timeout-w9fc2: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:37 -0400 EDT - event for affinity-clusterip-timeout-j28fn: {multus } AddedInterface: Add eth0 [10.128.148.36/23]
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:37 -0400 EDT - event for affinity-clusterip-timeout-j28fn: {kubelet ostest-5xqm8-worker-0-twrlr} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:38 -0400 EDT - event for affinity-clusterip-timeout-j28fn: {kubelet ostest-5xqm8-worker-0-twrlr} Started: Started container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:38 -0400 EDT - event for affinity-clusterip-timeout-j28fn: {kubelet ostest-5xqm8-worker-0-twrlr} Created: Created container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:42 -0400 EDT - event for affinity-clusterip-timeout-7bs6z: {multus } AddedInterface: Add eth0 [10.128.149.64/23]
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:43 -0400 EDT - event for affinity-clusterip-timeout-7bs6z: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:43 -0400 EDT - event for affinity-clusterip-timeout-7bs6z: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:43 -0400 EDT - event for affinity-clusterip-timeout-7bs6z: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:55 -0400 EDT - event for execpod-affinityxwmtc: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container agnhost-pause
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:55 -0400 EDT - event for execpod-affinityxwmtc: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container agnhost-pause
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:27:55 -0400 EDT - event for execpod-affinityxwmtc: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:30:05 -0400 EDT - event for execpod-affinityxwmtc: {kubelet ostest-5xqm8-worker-0-rzx47} Killing: Stopping container agnhost-pause
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:30:06 -0400 EDT - event for affinity-clusterip-timeout: {endpoint-slice-controller } FailedToUpdateEndpointSlices: Error updating Endpoint Slices for Service e2e-services-252/affinity-clusterip-timeout: Error updating affinity-clusterip-timeout-t8p69 EndpointSlice for Service e2e-services-252/affinity-clusterip-timeout: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "affinity-clusterip-timeout-t8p69": the object has been modified; please apply your changes to the latest version and try again
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:30:06 -0400 EDT - event for affinity-clusterip-timeout: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint e2e-services-252/affinity-clusterip-timeout: Operation cannot be fulfilled on endpoints "affinity-clusterip-timeout": the object has been modified; please apply your changes to the latest version and try again
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:30:06 -0400 EDT - event for affinity-clusterip-timeout-7bs6z: {kubelet ostest-5xqm8-worker-0-cbbx9} Killing: Stopping container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:30:06 -0400 EDT - event for affinity-clusterip-timeout-j28fn: {kubelet ostest-5xqm8-worker-0-twrlr} Killing: Stopping container affinity-clusterip-timeout
Sep  9 04:31:06.449: INFO: At 2020-09-09 04:30:06 -0400 EDT - event for affinity-clusterip-timeout-w9fc2: {kubelet ostest-5xqm8-worker-0-rzx47} Killing: Stopping container affinity-clusterip-timeout
Sep  9 04:31:06.475: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:31:06.475: INFO: 
Sep  9 04:31:06.511: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:31:06.511: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-services-252" for this suite.
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:735
Sep  9 04:31:06.612: INFO: Running AfterSuite actions on all nodes
Sep  9 04:31:06.612: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:3313]: Unexpected error:
    <*errors.errorString | 0xc001da65f0>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
occurred

Stderr
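
The reachability probe in the log above is a plain nc check run from a helper pod. A manual reproduction, useful when re-checking the service outside the suite, could look like the following sketch; the pod name reach-check is an assumption, the image and the nc invocation are copied from the log, and the e2e-services-252 namespace is torn down by the suite, so recreate it or substitute your own.

# Long-lived helper pod; "pause" keeps the agnhost container alive,
# mirroring the framework's exec pod.
kubectl run reach-check -n e2e-services-252 \
  --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 \
  --restart=Never -- pause

# Same probe the framework runs: exit 0 means the ClusterIP answered on port 80.
kubectl exec -n e2e-services-252 reach-check -- \
  /bin/sh -x -c 'nc -zv -t -w 2 affinity-clusterip-timeout 80'

# Clean up the helper pod afterwards.
kubectl delete pod reach-check -n e2e-services-252
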
[sig-storage]_Downward_API_volume_should_set_DefaultMode_on_files_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 46.5s

[k8s.io]_InitContainer_[NodeConformance]_should_invoke_init_containers_on_a_RestartNever_pod_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 32.6s

[sig-api-machinery]_ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_secret._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 18.5s

[sig-api-machinery]_ResourceQuota_should_be_able_to_update_and_delete_ResourceQuota._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.4s

[k8s.io]_InitContainer_[NodeConformance]_should_invoke_init_containers_on_a_RestartAlways_pod_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 40.8s

[sig-cli]_Kubectl_client_Kubectl_server-side_dry-run_should_check_if_kubectl_can_dry-run_update_Pods_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 6.5s

[sig-storage]_ConfigMap_should_be_consumable_from_pods_in_volume_with_mappings_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 39.6s

[sig-cli]_Kubectl_client_Kubectl_patch_should_add_annotations_for_pods_in_rc__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 81.0s

[k8s.io]_Pods_should_contain_environment_variables_for_services_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 327.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/pods.go:103]: Unexpected error:
    <*errors.errorString | 0xc000258860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
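This test starts a server pod plus a Service, then starts a client pod and asserts that the kubelet injected {SERVICENAME}_SERVICE_HOST/_PORT variables into the client's environment; the timeout here is the server pod never becoming ready (see the CNI events in the stdout below). A minimal sketch of the mechanism itself, using the always-present default/kubernetes Service and an assumed pod name env-probe with a busybox image, is:

# Env vars are injected for services that exist before the pod starts;
# the kubernetes master Service is always injected, so it makes a safe demo.
kubectl run env-probe --image=busybox --restart=Never -- sleep 300
kubectl wait --for=condition=Ready pod/env-probe
kubectl exec env-probe -- env | grep '^KUBERNETES_SERVICE'
kubectl delete pod env-probe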

Stdout
I0909 04:23:58.356527  839187 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:23:58.408: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:23:58.443: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:23:58.524: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:23:58.524: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:23:58.524: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:23:58.555: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:23:58.562: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:23:58.583: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [k8s.io] Pods
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pods
Sep  9 04:24:00.876: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:24:01.204: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  @/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should contain environment variables for services [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Pods
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-pods-6560".
STEP: Found 7 events.
Sep  9 04:29:25.132: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e: { } Scheduled: Successfully assigned e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e to ostest-5xqm8-worker-0-rzx47
Sep  9 04:29:25.132: INFO: At 2020-09-09 04:27:09 -0400 EDT - event for server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(c1f62085d54b8fbb278bed495f80f6686c5fb4e5bd5169bb074212eae02d32b9): netplugin failed: "2020/09/09 08:24:01 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-pods-6560;K8S_POD_NAME=server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e;K8S_POD_INFRA_CONTAINER_ID=c1f62085d54b8fbb278bed495f80f6686c5fb4e5bd5169bb074212eae02d32b9, CNI_NETNS=/var/run/netns/2e1c601a-c252-4d3d-a591-7ac8e4606573).\n"
Sep  9 04:29:25.132: INFO: At 2020-09-09 04:27:30 -0400 EDT - event for server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(7c2170db2637f97adc50fc3a7aa2b84a359732d53c9085f1c3c35c2f32cd4f62): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:25.132: INFO: At 2020-09-09 04:27:51 -0400 EDT - event for server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(c70e2311bcb1050f756114f8ab358bf13d868abc4a758688494f818a949a1642): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:25.132: INFO: At 2020-09-09 04:28:13 -0400 EDT - event for server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(ab9c6ee7e849e82e95d8b70c677ee3493cba58e6fab0afeb5de903326cf646e9): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:25.132: INFO: At 2020-09-09 04:28:40 -0400 EDT - event for server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(a0f9b4898f7f79e083af6c7a1509499f0e329b0bcd8872b0ef22e6e6f30f276b): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:25.132: INFO: At 2020-09-09 04:29:03 -0400 EDT - event for server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(9df5daf386e19c31342e4d84e86f7ac2bb30d71c57bb340128917ef1ed834989): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:25.141: INFO: POD                                                  NODE                         PHASE    GRACE  CONDITIONS
Sep  9 04:29:25.141: INFO: server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e  ostest-5xqm8-worker-0-rzx47  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:24:01 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:24:01 -0400 EDT ContainersNotReady containers with unready status: [srv]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:24:01 -0400 EDT ContainersNotReady containers with unready status: [srv]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:24:01 -0400 EDT  }]
Sep  9 04:29:25.142: INFO: 
Sep  9 04:29:25.190: INFO: unable to fetch logs for pods: server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e[e2e-pods-6560].container[srv].error=the server rejected our request for an unknown reason (get pods server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e)
Sep  9 04:29:25.223: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:29:25.223: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-pods-6560" for this suite.
Sep  9 04:29:25.264: INFO: Running AfterSuite actions on all nodes
Sep  9 04:29:25.264: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/pods.go:103]: Unexpected error:
    <*errors.errorString | 0xc000258860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stderr
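
The repeated FailedCreatePodSandBox events above point at the Kuryr CNI daemon returning HTTP 500 rather than at the test logic. A first-pass triage from the command line might look like the sketch below; it assumes a Kuryr-based install where the CNI pods run in the openshift-kuryr namespace with an app=kuryr-cni label, so adjust the namespace and selector to whatever the daemonset actually carries.

# Recent events for the test namespace (already destroyed here, so this is
# most useful while the failure is still reproducing).
kubectl get events -n e2e-pods-6560 --sort-by=.lastTimestamp

# State and recent logs of the Kuryr CNI daemon pods; namespace and label assumed.
kubectl get pods -n openshift-kuryr -o wide
kubectl logs -n openshift-kuryr -l app=kuryr-cni --tail=100
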
[sig-node]_Downward_API_should_provide_host_IP_as_an_env_var_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 340.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc0017ae860>: {
        s: "expected pod \"downward-api-61172db3-3d96-4583-904f-a2283b9cd03c\" success: Gave up after waiting 5m0s for pod \"downward-api-61172db3-3d96-4583-904f-a2283b9cd03c\" to be \"Succeeded or Failed\"",
    }
    expected pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c" success: Gave up after waiting 5m0s for pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c" to be "Succeeded or Failed"
occurred
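The pod under test only needs to print an env var populated from the downward API, but it stayed Pending for the full 5m0s, consistent with the sandbox/CNI problems seen elsewhere in this run. For reference, a minimal pod exposing the host IP the way this test does might look like the sketch below; the pod name and image are assumptions, while the fieldRef path is the standard core/v1 one.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo        # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF

# Once the pod has run, its log should contain the node's IP.
kubectl logs downward-hostip-demo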

Stdout
I0909 04:23:53.770357  838730 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:23:53.831: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:23:53.951: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:23:54.307: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:23:54.307: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:23:54.307: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:23:54.400: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:23:54.446: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:23:54.476: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-node] Downward API
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename downward-api
Sep  9 04:23:55.139: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:23:55.724: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep  9 04:23:55.941: INFO: Waiting up to 5m0s for pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c" in namespace "e2e-downward-api-2833" to be "Succeeded or Failed"
Sep  9 04:23:55.973: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.676494ms
Sep  9 04:23:58.018: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076553991s
Sep  9 04:24:00.137: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195340535s
Sep  9 04:24:02.174: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233133128s
Sep  9 04:24:04.196: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254631866s
Sep  9 04:24:06.238: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.296787926s
Sep  9 04:24:08.292: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.351117558s
Sep  9 04:24:32.522: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.580918953s
Sep  9 04:24:34.557: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.615736022s
Sep  9 04:24:36.598: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.656266538s
Sep  9 04:24:38.663: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.721611695s
Sep  9 04:24:40.701: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.760123399s
Sep  9 04:24:42.897: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.955500362s
Sep  9 04:24:44.912: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.970642068s
Sep  9 04:24:47.300: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.358919694s
Sep  9 04:24:49.310: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.368615727s
Sep  9 04:24:51.338: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 55.396231008s
Sep  9 04:24:53.348: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.406635248s
Sep  9 04:24:55.407: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 59.465837223s
Sep  9 04:24:57.431: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.490153738s
Sep  9 04:24:59.443: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.501985408s
Sep  9 04:25:01.480: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.539049479s
Sep  9 04:25:03.496: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.554839232s
Sep  9 04:25:05.587: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.645996044s
Sep  9 04:25:07.603: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.661408502s
Sep  9 04:25:09.612: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.670495246s
Sep  9 04:25:11.633: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.692069494s
Sep  9 04:25:13.651: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.710077336s
Sep  9 04:25:15.744: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.802852139s
Sep  9 04:25:17.771: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.829203659s
Sep  9 04:25:19.796: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.854843129s
Sep  9 04:25:21.810: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.869017064s
Sep  9 04:25:23.827: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m27.885359077s
Sep  9 04:25:25.845: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m29.903545232s
Sep  9 04:25:27.876: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m31.934950862s
Sep  9 04:25:29.888: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m33.946703898s
Sep  9 04:25:31.897: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.955934396s
Sep  9 04:25:33.913: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.971373792s
Sep  9 04:25:35.954: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.012928855s
Sep  9 04:25:37.985: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.043294154s
Sep  9 04:25:40.001: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.059870016s
Sep  9 04:25:42.012: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.070709403s
Sep  9 04:25:44.027: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.086135266s
Sep  9 04:25:46.041: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.100081919s
Sep  9 04:25:48.058: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.1167355s
Sep  9 04:25:50.202: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.261170279s
Sep  9 04:25:52.222: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.28054925s
Sep  9 04:25:54.232: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.290650613s
Sep  9 04:25:56.266: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.324986483s
Sep  9 04:25:58.282: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.34115832s
Sep  9 04:26:00.291: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.350090103s
Sep  9 04:26:02.338: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.396331385s
Sep  9 04:26:04.348: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.406612187s
Sep  9 04:26:06.358: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.416236795s
Sep  9 04:26:08.427: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.485780101s
Sep  9 04:26:10.447: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.506060048s
Sep  9 04:26:12.482: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.540500269s
Sep  9 04:26:14.504: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.562865824s
Sep  9 04:26:16.526: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.584835903s
Sep  9 04:26:18.624: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.682729272s
Sep  9 04:26:20.674: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.732929725s
Sep  9 04:26:22.686: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.744372209s
Sep  9 04:26:24.735: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.793415792s
Sep  9 04:26:26.750: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.809084249s
Sep  9 04:26:28.776: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.834400717s
Sep  9 04:26:30.822: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.88020366s
Sep  9 04:26:32.845: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.903611392s
Sep  9 04:26:34.878: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.936833244s
Sep  9 04:26:36.892: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.95030186s
Sep  9 04:26:38.966: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m43.024536882s
Sep  9 04:26:41.075: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m45.133279527s
Sep  9 04:26:43.134: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m47.193020465s
Sep  9 04:26:45.149: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m49.208105096s
Sep  9 04:26:47.221: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m51.279427065s
Sep  9 04:26:49.233: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m53.291641783s
Sep  9 04:26:51.256: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m55.314302197s
Sep  9 04:26:53.265: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m57.323648026s
Sep  9 04:26:55.290: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2m59.348861251s
Sep  9 04:26:57.300: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m1.359139004s
Sep  9 04:26:59.334: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m3.392509119s
Sep  9 04:27:01.342: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m5.401122797s
Sep  9 04:27:03.358: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m7.41623351s
Sep  9 04:27:05.386: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m9.445036232s
Sep  9 04:27:07.452: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m11.510335474s
Sep  9 04:27:09.502: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m13.560848337s
Sep  9 04:27:11.525: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m15.583933255s
Sep  9 04:27:13.542: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m17.600384125s
Sep  9 04:27:15.579: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m19.637863448s
Sep  9 04:27:17.608: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m21.666856161s
Sep  9 04:27:19.627: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m23.686050007s
Sep  9 04:27:21.654: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m25.712427908s
Sep  9 04:27:23.665: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m27.723966739s
Sep  9 04:27:25.675: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m29.733898431s
Sep  9 04:27:27.738: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m31.796919255s
Sep  9 04:27:29.750: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m33.809137433s
Sep  9 04:27:31.761: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m35.820089235s
Sep  9 04:27:33.777: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m37.835428065s
Sep  9 04:27:35.796: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m39.855078302s
Sep  9 04:27:37.815: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m41.873263803s
Sep  9 04:27:39.830: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m43.889081557s
Sep  9 04:27:41.845: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m45.903296075s
Sep  9 04:27:43.860: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m47.91838529s
Sep  9 04:27:45.872: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m49.930835981s
Sep  9 04:27:47.897: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m51.955621828s
Sep  9 04:27:49.956: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.015132068s
Sep  9 04:27:51.971: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.029805458s
Sep  9 04:27:53.997: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.056036621s
Sep  9 04:27:56.006: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.064409263s
Sep  9 04:27:58.017: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.075910167s
Sep  9 04:28:00.043: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.101948551s
Sep  9 04:28:02.058: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.117113691s
Sep  9 04:28:04.086: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.144315672s
Sep  9 04:28:06.100: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.158686493s
Sep  9 04:28:08.118: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.176193849s
Sep  9 04:28:10.156: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.214670686s
Sep  9 04:28:12.175: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.233628152s
Sep  9 04:28:14.199: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.257923049s
Sep  9 04:28:16.217: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.275567568s
Sep  9 04:28:18.237: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.2961623s
Sep  9 04:28:20.439: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.498057629s
Sep  9 04:28:22.456: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.514826236s
Sep  9 04:28:24.499: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.557251667s
Sep  9 04:28:26.562: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.620842081s
Sep  9 04:28:28.593: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.651579216s
Sep  9 04:28:30.620: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.67833435s
Sep  9 04:28:32.657: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.71609596s
Sep  9 04:28:34.695: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.753464045s
Sep  9 04:28:36.704: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.762425372s
Sep  9 04:28:38.753: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.811937964s
Sep  9 04:28:40.806: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.865141233s
Sep  9 04:28:42.818: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.876582734s
Sep  9 04:28:44.842: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.900654348s
Sep  9 04:28:47.005: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.063853743s
Sep  9 04:28:49.020: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.078368777s
Sep  9 04:28:51.041: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.099542003s
Sep  9 04:28:53.053: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.112147932s
Sep  9 04:28:55.069: INFO: Pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.127622773s
Sep  9 04:29:25.120: INFO: Failed to get logs from node "ostest-5xqm8-worker-0-twrlr" pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c" container "dapi-container": the server rejected our request for an unknown reason (get pods downward-api-61172db3-3d96-4583-904f-a2283b9cd03c)
STEP: delete the pod
Sep  9 04:29:25.174: INFO: Waiting for pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c to disappear
Sep  9 04:29:25.190: INFO: Pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c still exists
Sep  9 04:29:27.191: INFO: Waiting for pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c to disappear
Sep  9 04:29:27.215: INFO: Pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c still exists
Sep  9 04:29:29.191: INFO: Waiting for pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c to disappear
Sep  9 04:29:29.354: INFO: Pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c still exists
Sep  9 04:29:31.190: INFO: Waiting for pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c to disappear
Sep  9 04:29:31.210: INFO: Pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c still exists
Sep  9 04:29:33.191: INFO: Waiting for pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c to disappear
Sep  9 04:29:33.244: INFO: Pod downward-api-61172db3-3d96-4583-904f-a2283b9cd03c no longer exists
[AfterEach] [sig-node] Downward API
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-downward-api-2833".
STEP: Found 8 events.
Sep  9 04:29:33.298: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: { } Scheduled: Successfully assigned e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c to ostest-5xqm8-worker-0-twrlr
Sep  9 04:29:33.298: INFO: At 2020-09-09 04:27:05 -0400 EDT - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(01362b0c9088dfd665a3a094a09cfec7659db7752a3c3b2744043b1ff726bf33): netplugin failed: "2020/09/09 08:23:56 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-downward-api-2833;K8S_POD_NAME=downward-api-61172db3-3d96-4583-904f-a2283b9cd03c;K8S_POD_INFRA_CONTAINER_ID=01362b0c9088dfd665a3a094a09cfec7659db7752a3c3b2744043b1ff726bf33, CNI_NETNS=/var/run/netns/b180b462-d971-40d4-9f0c-5880faca044e).\n"
Sep  9 04:29:33.298: INFO: At 2020-09-09 04:27:30 -0400 EDT - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(71f00f3bdb1bce436bf14c447e476e3745d8a6a3b81f6e6b7c29b33184762f43): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:33.298: INFO: At 2020-09-09 04:27:51 -0400 EDT - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(12bacbe140ecd14293513c24692245293a2f90976719fee6cc983834bf63a75b): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:33.298: INFO: At 2020-09-09 04:28:13 -0400 EDT - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(7979bd224059470b42ca006356ab4a0fddcd64d29243237db534f1d9a15d2b61): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:33.298: INFO: At 2020-09-09 04:28:39 -0400 EDT - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(5a9ac2a49d52287069dfb4e56772b4309833025a87034d9d9525f333058597e2): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:33.298: INFO: At 2020-09-09 04:29:03 -0400 EDT - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(b6d2c4b189a38179d5b59e5598f1ef4512a23399188070400ba217fed054ed49): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:33.298: INFO: At 2020-09-09 04:29:27 -0400 EDT - event for downward-api-61172db3-3d96-4583-904f-a2283b9cd03c: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(6019f12c95e63a1856b7a3c8d2b28f8af9957f129356c89d19fad546450feb08): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:29:33.319: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:29:33.319: INFO: 
Sep  9 04:29:33.340: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:29:33.340: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-downward-api-2833" for this suite.
Sep  9 04:29:33.411: INFO: Running AfterSuite actions on all nodes
Sep  9 04:29:33.411: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc0017ae860>: {
        s: "expected pod \"downward-api-61172db3-3d96-4583-904f-a2283b9cd03c\" success: Gave up after waiting 5m0s for pod \"downward-api-61172db3-3d96-4583-904f-a2283b9cd03c\" to be \"Succeeded or Failed\"",
    }
    expected pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c" success: Gave up after waiting 5m0s for pod "downward-api-61172db3-3d96-4583-904f-a2283b9cd03c" to be "Succeeded or Failed"
occurred

Stderr
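Note on this failure: the pod never progressed past Pending because every sandbox-creation attempt failed with the kuryr CNI daemon returning error 500 (see the FailedCreatePodSandBox events above), so the 5m "Succeeded or Failed" wait expired; the Downward API itself was never exercised. For reference, a minimal sketch of the kind of pod this test builds is shown below. It is an illustration under assumptions (the image, env var name, and command are not taken from the upstream test source; only the container name "dapi-container" appears in the log above), not the test's actual code.

// Illustrative sketch only: a pod that exposes the node's IP to its container
// via a Downward API env var fieldRef, roughly what the "host IP as an env var"
// conformance test exercises. Image, env name, and command are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "printenv HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						// Downward API: the kubelet injects the node's IP here.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Env)
}

Such a pod only reaches the point of resolving status.hostIP once its sandbox (and therefore its network) has been created; with the CNI ADD calls failing as shown above, the check never ran.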
[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_deny_crd_creation_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 276.0s

[k8s.io]_Variable_Expansion_should_fail_substituting_values_in_a_volume_subpath_with_backticks_[sig-storage][Slow]_[Conformance]_[Suite:k8s]
e2e_tests
Time Taken: 199.0s

[sig-storage]_Projected_configMap_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_mode_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 280.0s

[sig-auth]_Certificates_API_[Privileged:ClusterAdmin]_should_support_CSR_API_operations_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 5.2s

[sig-api-machinery]_CustomResourceDefinition_resources_[Privileged:ClusterAdmin]_Simple_CustomResourceDefinition_listing_custom_resource_definition_objects_works__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 9.9s

[sig-api-machinery]_Events_should_ensure_that_an_event_can_be_fetched,_patched,_deleted,_and_listed_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.8s

[sig-api-machinery]_ResourceQuota_should_verify_ResourceQuota_with_terminating_scopes._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 18.1s

[sig-api-machinery]_Garbage_collector_should_orphan_RS_created_by_deployment_when_deleteOptions.PropagationPolicy_is_Orphan_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.4s

[k8s.io]_Pods_should_allow_activeDeadlineSeconds_to_be_updated_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 98.0s

[sig-cli]_Kubectl_client_Guestbook_application_should_create_and_stop_a_working_application__[Conformance]_[Slow]_[Suite:k8s]
e2e_tests
Time Taken: 161.0s

[sig-api-machinery]_Watchers_should_be_able_to_start_watching_from_a_specific_resource_version_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.4s

[sig-storage]_Downward_API_volume_should_provide_podname_only_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 184.0s

[k8s.io]_Probing_container_with_readiness_probe_should_not_be_ready_before_initial_delay_and_never_restart_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 61.0s

[sig-storage]_Subpath_Atomic_writer_volumes_should_support_subpaths_with_configmap_pod_with_mountPath_of_existing_file_[LinuxOnly]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 77.0s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_listing_validating_webhooks_should_work_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 43.4s

[sig-api-machinery]_Secrets_should_be_consumable_from_pods_in_env_vars_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 68.0s

[k8s.io]_Pods_should_be_submitted_and_removed_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 58.3s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_unconditionally_reject_operations_on_fail_closed_webhook_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 39.5s

[sig-storage]_Secrets_should_be_consumable_from_pods_in_volume_with_mappings_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 45.8s

[sig-storage]_Downward_API_volume_should_set_mode_on_item_file_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 40.4s

[sig-storage]_ConfigMap_updates_should_be_reflected_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 36.4s

[sig-storage]_Projected_downwardAPI_should_provide_container's_memory_request_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 38.3s

[sig-api-machinery]_Namespaces_[Serial]_should_ensure_that_all_pods_are_removed_when_a_namespace_is_deleted_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 82.0s

[k8s.io]_Docker_Containers_should_be_able_to_override_the_image's_default_arguments_(docker_cmd)_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 56.0s

[sig-network]_DNS_should_provide_DNS_for_pods_for_Subdomain_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 41.9s

[sig-storage]_ConfigMap_should_be_consumable_from_pods_in_volume_as_non-root_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 58.5s

[sig-scheduling]_SchedulerPredicates_[Serial]_validates_that_NodeSelector_is_respected_if_not_matching__[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.5s

[sig-storage]_ConfigMap_optional_updates_should_be_reflected_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 47.5s

[k8s.io]_Variable_Expansion_should_verify_that_a_failing_subpath_expansion_can_be_modified_during_the_lifecycle_of_a_container_[sig-storage][Slow]_[Conformance]_[Suite:k8s]
e2e_tests
Time Taken: 166.0s

[sig-apps]_Job_should_run_a_job_to_completion_when_tasks_sometimes_fail_and_are_locally_restarted_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 57.5s

[sig-storage]_EmptyDir_volumes_should_support_(non-root,0644,default)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 54.9s

[sig-storage]_EmptyDir_volumes_should_support_(root,0666,tmpfs)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 39.9s

[sig-node]_ConfigMap_should_be_consumable_via_environment_variable_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 51.8s

[sig-api-machinery]_ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_replica_set._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 12.4s

[sig-node]_Downward_API_should_provide_default_limits.cpu/memory_from_node_allocatable_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 43.6s

[sig-network]_Service_endpoints_latency_should_not_be_very_high__[Conformance]_[Serial]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 33.3s

[k8s.io]_Variable_Expansion_should_allow_composing_env_vars_into_new_env_vars_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 56.4s

[sig-storage]_Projected_configMap_should_be_consumable_from_pods_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 43.8s

[sig-storage]_Projected_configMap_updates_should_be_reflected_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 136.0s

[k8s.io]_Security_Context_When_creating_a_container_with_runAsUser_should_run_the_container_with_uid_65534_[LinuxOnly]_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 47.5s

[sig-storage]_EmptyDir_volumes_should_support_(non-root,0644,tmpfs)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 53.8s

[k8s.io]_Lease_lease_API_should_be_available_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.3s

[sig-auth]_ServiceAccounts_should_mount_an_API_token_into_pods__[Conformance]_[Disabled:Broken]_[Suite:k8s]
e2e_tests
Time Taken: 37.4s

[k8s.io]_Container_Lifecycle_Hook_when_create_a_pod_with_lifecycle_hook_should_execute_poststart_http_hook_properly_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 62.0s

[k8s.io]_Probing_container_should_have_monotonically_increasing_restart_count_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 171.0s

[sig-node]_Downward_API_should_provide_pod_UID_as_env_vars_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 48.1s

[sig-api-machinery]_Garbage_collector_should_not_delete_dependents_that_have_both_valid_owner_and_owner_that's_waiting_for_dependents_to_be_deleted_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 56.5s

[sig-apps]_StatefulSet_[k8s.io]_Basic_StatefulSet_functionality_[StatefulSetBasic]_should_perform_canary_updates_and_phased_rolling_updates_of_template_modifications_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 612.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58]: Sep  9 04:28:07.336: Failed waiting for pods to enter running: timed out waiting for the condition

Stdout
I0909 04:18:06.360703  811234 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:18:06.410: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:18:06.438: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:18:06.527: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:18:06.527: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:18:06.527: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:18:06.558: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:18:06.563: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:18:06.589: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-apps] StatefulSet
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename statefulset
Sep  9 04:18:06.872: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:18:07.222: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  @/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  @/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace e2e-statefulset-4657
[It] should perform canary updates and phased rolling updates of template modifications [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Sep  9 04:18:07.313: INFO: Found 0 stateful pods, waiting for 3
Sep  9 04:18:17.320: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:18:27.331: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:18:37.353: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:18:47.331: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:18:57.331: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:19:07.355: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:19:17.326: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:19:27.338: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:19:37.332: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:19:47.339: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:19:57.400: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:20:07.332: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:20:17.333: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:20:27.328: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:20:37.330: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:20:47.343: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:20:57.362: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:21:07.346: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:21:17.324: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:21:27.340: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:21:37.348: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:21:47.410: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:21:57.372: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:22:07.333: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:22:17.322: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:22:27.359: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:22:37.325: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:22:47.326: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:22:57.379: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:23:07.350: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:23:17.444: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:23:27.331: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:23:37.349: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:23:47.340: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:23:57.371: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:24:07.324: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:24:17.339: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:24:27.461: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:24:37.380: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:24:47.455: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:24:57.332: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:25:07.324: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:25:17.362: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:25:27.322: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:25:37.331: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:25:47.325: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:25:57.621: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:26:07.328: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:26:17.343: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:26:27.326: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:26:37.328: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:26:47.406: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:26:57.339: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:27:07.341: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:27:17.344: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:27:27.329: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:27:37.420: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:27:47.329: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:27:57.337: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:28:07.323: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:28:07.335: INFO: Found 1 stateful pods, waiting for 3
Sep  9 04:28:07.336: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x6b71de0, 0xc001254000, 0x300000003, 0xc000bda000)
	@/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58 +0x10e
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	@/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.8()
	@/k8s.io/kubernetes/test/e2e/apps/statefulset.go:323 +0x2ce
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc00157be60, 0xc000e034e0, 0x1, 0x1, 0x0, 0x22442a0)
	github.com/openshift/origin@/pkg/test/ginkgo/cmd_runtest.go:61 +0x41f
main.newRunTestCommand.func1.1()
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:239 +0x4e
github.com/openshift/origin/test/extended/util.WithCleanup(0xc001b8bbd8)
	github.com/openshift/origin@/test/extended/util/test.go:167 +0x58
main.newRunTestCommand.func1(0xc000b3e780, 0xc000e034e0, 0x1, 0x1, 0x0, 0x0)
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:239 +0x1be
github.com/spf13/cobra.(*Command).execute(0xc000b3e780, 0xc000e034a0, 0x1, 0x1, 0xc000b3e780, 0xc000e034a0)
	@/github.com/spf13/cobra/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc0007c4f00, 0x0, 0x696bee0, 0x9eaaea8)
	@/github.com/spf13/cobra/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	@/github.com/spf13/cobra/command.go:864
main.main.func1(0xc0007c4f00, 0x0, 0x0)
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:61 +0x9c
main.main()
	github.com/openshift/origin@/cmd/openshift-tests/openshift-tests.go:62 +0x36e
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  @/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep  9 04:28:07.345: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config describe po ss2-0 --namespace=e2e-statefulset-4657'
Sep  9 04:28:07.685: INFO: stderr: ""
Sep  9 04:28:07.685: INFO: stdout: "Name:           ss2-0\nNamespace:      e2e-statefulset-4657\nPriority:       0\nNode:           ostest-5xqm8-worker-0-cbbx9/10.196.2.198\nStart Time:     Wed, 09 Sep 2020 04:18:07 -0400\nLabels:         baz=blah\n                controller-revision-hash=ss2-65c7964b94\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations:    openshift.io/scc: anyuid\nStatus:         Pending\nIP:             \nIPs:            <none>\nControlled By:  StatefulSet/ss2\nContainers:\n  webserver:\n    Container ID:   \n    Image:          docker.io/library/httpd:2.4.38-alpine\n    Image ID:       \n    Port:           <none>\n    Host Port:      <none>\n    State:          Waiting\n      Reason:       ContainerCreating\n    Ready:          False\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4fccg (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             False \n  ContainersReady   False \n  PodScheduled      True \nVolumes:\n  default-token-4fccg:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-4fccg\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type     Reason                  Age    From                                  Message\n  ----     ------                  ----   ----                                  -------\n  Normal   Scheduled               10m                                          Successfully assigned e2e-statefulset-4657/ss2-0 to ostest-5xqm8-worker-0-cbbx9\n  Warning  FailedCreatePodSandBox  9m26s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(16d02ebf056a33760686eceed4f69f28e6d33644015053b1bce2aa76a8403339): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  9m     kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(2716f8b0dfa581797798fafa468c8771504ea7ab4ba5155af3c4c2db3e619f5c): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  8m37s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(e7c0aab95eb4a5eaca32be13f541b9d62eac6ae918a1eb8f6656cae87c3efd69): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  8m15s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(5413162bd572ecc081ddec2379f75de756c5e8e4c3c8070cb4498b739bd4f56d): 
[e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  7m52s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(8d6e9e7eb17c5df3a4688342bd71da637edc472805bfcc20be99dcf53b69bc20): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  7m29s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(31cf8915ed6ee7c5442700981f007c88956880636ca89fd3bcd3d7033faa8b7f): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  7m3s   kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(708300cfd36258474aa2ace00b9068b6e46cf031e81d0b295359c547dcc839d2): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  6m40s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(df9294c7310b2c5ad64b3b8edb7f9164cfef15785e3a5ed22a2d435796fbd438): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n  Warning  FailedCreatePodSandBox  6m14s                kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(d9bacfddcff63259197239393e19a25c0a868bd27841c6bcfe082382a03ce0f6): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500\n  Warning  FailedCreatePodSandBox  0s (x15 over 5m51s)  kubelet, ostest-5xqm8-worker-0-cbbx9  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(eea80d321f96241afd8ad563d94d05340e6ec0f84c11a83a2fc3beadacb42ece): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network \"kuryr\": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n"
Sep  9 04:28:07.685: INFO: 
Output of kubectl describe ss2-0:
Name:           ss2-0
Namespace:      e2e-statefulset-4657
Priority:       0
Node:           ostest-5xqm8-worker-0-cbbx9/10.196.2.198
Start Time:     Wed, 09 Sep 2020 04:18:07 -0400
Labels:         baz=blah
                controller-revision-hash=ss2-65c7964b94
                foo=bar
                statefulset.kubernetes.io/pod-name=ss2-0
Annotations:    openshift.io/scc: anyuid
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/ss2
Containers:
  webserver:
    Container ID:   
    Image:          docker.io/library/httpd:2.4.38-alpine
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4fccg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-4fccg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4fccg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age    From                                  Message
  ----     ------                  ----   ----                                  -------
  Normal   Scheduled               10m                                          Successfully assigned e2e-statefulset-4657/ss2-0 to ostest-5xqm8-worker-0-cbbx9
  Warning  FailedCreatePodSandBox  9m26s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(16d02ebf056a33760686eceed4f69f28e6d33644015053b1bce2aa76a8403339): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  9m     kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(2716f8b0dfa581797798fafa468c8771504ea7ab4ba5155af3c4c2db3e619f5c): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  8m37s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(e7c0aab95eb4a5eaca32be13f541b9d62eac6ae918a1eb8f6656cae87c3efd69): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  8m15s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(5413162bd572ecc081ddec2379f75de756c5e8e4c3c8070cb4498b739bd4f56d): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  7m52s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(8d6e9e7eb17c5df3a4688342bd71da637edc472805bfcc20be99dcf53b69bc20): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  7m29s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(31cf8915ed6ee7c5442700981f007c88956880636ca89fd3bcd3d7033faa8b7f): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  7m3s   kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(708300cfd36258474aa2ace00b9068b6e46cf031e81d0b295359c547dcc839d2): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  6m40s  kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(df9294c7310b2c5ad64b3b8edb7f9164cfef15785e3a5ed22a2d435796fbd438): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
  Warning  FailedCreatePodSandBox  6m14s                kubelet, ostest-5xqm8-worker-0-cbbx9  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(d9bacfddcff63259197239393e19a25c0a868bd27841c6bcfe082382a03ce0f6): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500
  Warning  FailedCreatePodSandBox  0s (x15 over 5m51s)  kubelet, ostest-5xqm8-worker-0-cbbx9  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(eea80d321f96241afd8ad563d94d05340e6ec0f84c11a83a2fc3beadacb42ece): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

Sep  9 04:28:07.686: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config logs ss2-0 --namespace=e2e-statefulset-4657 --tail=100'
Sep  9 04:28:07.942: INFO: rc: 1
Sep  9 04:28:07.942: INFO: 
Last 100 log lines of ss2-0:

Sep  9 04:28:07.942: INFO: Deleting all statefulset in ns e2e-statefulset-4657
Sep  9 04:28:07.952: INFO: Scaling statefulset ss2 to 0
Sep  9 04:28:18.023: INFO: Waiting for statefulset status.replicas updated to 0
Sep  9 04:28:18.040: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-statefulset-4657".
STEP: Found 13 events.
Sep  9 04:28:18.140: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: { } Scheduled: Successfully assigned e2e-statefulset-4657/ss2-0 to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:18:07 -0400 EDT - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:18:41 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(16d02ebf056a33760686eceed4f69f28e6d33644015053b1bce2aa76a8403339): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:19:07 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(2716f8b0dfa581797798fafa468c8771504ea7ab4ba5155af3c4c2db3e619f5c): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:19:30 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(e7c0aab95eb4a5eaca32be13f541b9d62eac6ae918a1eb8f6656cae87c3efd69): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:19:52 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(5413162bd572ecc081ddec2379f75de756c5e8e4c3c8070cb4498b739bd4f56d): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:20:15 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(8d6e9e7eb17c5df3a4688342bd71da637edc472805bfcc20be99dcf53b69bc20): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:20:38 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(31cf8915ed6ee7c5442700981f007c88956880636ca89fd3bcd3d7033faa8b7f): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:21:04 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(708300cfd36258474aa2ace00b9068b6e46cf031e81d0b295359c547dcc839d2): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:21:27 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(df9294c7310b2c5ad64b3b8edb7f9164cfef15785e3a5ed22a2d435796fbd438): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

Sep  9 04:28:18.140: INFO: At 2020-09-09 04:21:53 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(d9bacfddcff63259197239393e19a25c0a868bd27841c6bcfe082382a03ce0f6): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:28:18.140: INFO: At 2020-09-09 04:22:16 -0400 EDT - event for ss2-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(eea80d321f96241afd8ad563d94d05340e6ec0f84c11a83a2fc3beadacb42ece): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

Sep  9 04:28:18.140: INFO: At 2020-09-09 04:28:08 -0400 EDT - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful
Sep  9 04:28:18.149: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:28:18.149: INFO: 
Sep  9 04:28:18.167: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:28:18.167: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-statefulset-4657" for this suite.
Sep  9 04:28:18.217: INFO: Running AfterSuite actions on all nodes
Sep  9 04:28:18.217: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58]: Sep  9 04:28:07.336: Failed waiting for pods to enter running: timed out waiting for the condition

Stderr
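
Note on the failure above: it is the framework's generic condition poll giving up at statefulset/wait.go:58. As an illustration only (not the suite's actual code), a wait of this shape can be sketched with k8s.io/apimachinery's wait helpers; podsRunning below is a hypothetical stand-in for the real check that the StatefulSet's pods are Running and Ready, and the interval/timeout values are assumptions:

// Illustrative sketch only; not the e2e framework's implementation.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// podsRunning is a hypothetical stand-in for listing the StatefulSet's pods
// and checking that each one is Running and Ready.
func podsRunning() (bool, error) {
	return false, nil // pretend the pods never come up, as happened above
}

func main() {
	// Poll until the condition holds or the timeout expires (values assumed).
	err := wait.PollImmediate(10*time.Second, 10*time.Minute, podsRunning)
	if errors.Is(err, wait.ErrWaitTimeout) {
		// This is the "timed out waiting for the condition" error in the log.
		fmt.Println("Failed waiting for pods to enter running:", err)
	}
}
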
[sig-cli]_Kubectl_client_Kubectl_expose_should_create_services_for_rc__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 81.0s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_mutate_custom_resource_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 305.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:846]: waiting for the deployment status valid%!(EXTRA string=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, string=sample-webhook-deployment, string=e2e-webhook-1226)
Unexpected error:
    <*errors.errorString | 0xc000dec920>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-7bc8486f8c\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
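
The wait that failed here is watching the Deployment's status: ReadyReplicas stayed at 0 and the Available condition stayed False until the test gave up. As a hedged diagnostic sketch (not part of the suite; the kubeconfig path is taken from the kubectl invocations elsewhere in this report, the namespace and deployment name from the error above), the same status can be read with client-go:

// Diagnostic sketch only; namespace and deployment name come from the log above.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", ".kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	d, err := client.AppsV1().Deployments("e2e-webhook-1226").
		Get(context.TODO(), "sample-webhook-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The failing wait needs ReadyReplicas to reach the desired count and the
	// Available condition to turn True; in this run both stayed at their
	// initial values for the whole wait.
	fmt.Printf("ready=%d available=%d unavailable=%d\n",
		d.Status.ReadyReplicas, d.Status.AvailableReplicas, d.Status.UnavailableReplicas)
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable {
			fmt.Printf("Available=%v reason=%s message=%q\n",
				c.Status == corev1.ConditionTrue, c.Reason, c.Message)
		}
	}
}
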

Stdout
I0909 04:16:52.071462  805203 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:16:52.137: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:16:52.175: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:16:52.271: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:16:52.271: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:16:52.271: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:16:52.290: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:16:52.292: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:16:52.329: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename webhook
Sep  9 04:16:52.815: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:16:53.077: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  @/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Sep  9 04:16:54.103: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep  9 04:16:54.166: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Sep  9 04:16:56.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:16:58.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:17:00.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:17:02 through 04:18:28: INFO: deployment status unchanged (identical status line repeated at ~2 s polling intervals): ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Available=False ("MinimumReplicasUnavailable": "Deployment does not have minimum availability."), Progressing=True ("ReplicaSetUpdated": ReplicaSet "sample-webhook-deployment-7bc8486f8c" is progressing)
Sep  9 04:18:30.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:32.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:34.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:36.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:38.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:40.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:42.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:44.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:46.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:48.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:50.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:52.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:54.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:56.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:18:58.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:00.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:02.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:04.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:06.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:08.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:10.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:12.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:14.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:16.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:18.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:20.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:22.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:24.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:26.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:28.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:30.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:32.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:34.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:36.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:38.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:40.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:42.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:44.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:46.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:48.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:50.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:52.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:54.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:56.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:19:58.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:00.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:02.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:04.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:06.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:08.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:10.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:12.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:14.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:16.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 04:20:18.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
[The identical deployment status (1 replica updated but unavailable; Available=False/MinimumReplicasUnavailable, Progressing=True/ReplicaSetUpdated for ReplicaSet "sample-webhook-deployment-7bc8486f8c", CollisionCount nil) was re-logged every ~2s from 04:20:20.212 through the final poll at 04:21:56.230; only the timestamps differ from the line above.]
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-webhook-1226".
STEP: Found 10 events.
Sep  9 04:21:56.240: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-7bc8486f8c-mrn7z: { } Scheduled: Successfully assigned e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z to ostest-5xqm8-worker-0-rzx47
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:16:54 -0400 EDT - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:16:54 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-7bc8486f8c-mrn7z
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:16:55 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c-mrn7z: {kubelet ostest-5xqm8-worker-0-rzx47} FailedMount: MountVolume.SetUp failed for volume "default-token-j9rz4" : failed to sync secret cache: timed out waiting for the condition
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:19:46 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c-mrn7z: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(ad04963ce8214ad662085788e7b2a00c346346598f3a3602541282e4f1b64e5b): netplugin failed: "2020/09/09 08:16:56 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-webhook-1226;K8S_POD_NAME=sample-webhook-deployment-7bc8486f8c-mrn7z;K8S_POD_INFRA_CONTAINER_ID=ad04963ce8214ad662085788e7b2a00c346346598f3a3602541282e4f1b64e5b, CNI_NETNS=/var/run/netns/8d5a9607-d238-4096-9204-ed13e41786bf).\n"
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:20:08 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c-mrn7z: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(6f96671abd1a8dd6f1904b7e3c129bec385fdf72aa2ffe197da029f9803c6e8b): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:20:31 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c-mrn7z: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(45ec66d8cbae190f2a4906b722e79c7eccb06d31bb436195baf9f5f7580ac52d): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:20:53 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c-mrn7z: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(28b91d6decb6b7d98f51c757049335fe4c9ead8c238bc148268fb3ff367b1d6a): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:21:17 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c-mrn7z: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(34f9190fa86af3e22104bd36b03523a57a8532b7f76497a8e01f89c0cc995164): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:21:56.240: INFO: At 2020-09-09 04:21:43 -0400 EDT - event for sample-webhook-deployment-7bc8486f8c-mrn7z: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(1971224325c4fa848dfc476629a55b008d559ee0f49066ea1ca7a94a983d655e): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:21:56.253: INFO: POD                                         NODE                         PHASE    GRACE  CONDITIONS
Sep  9 04:21:56.253: INFO: sample-webhook-deployment-7bc8486f8c-mrn7z  ostest-5xqm8-worker-0-rzx47  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:16:54 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:16:54 -0400 EDT ContainersNotReady containers with unready status: [sample-webhook]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:16:54 -0400 EDT ContainersNotReady containers with unready status: [sample-webhook]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:16:54 -0400 EDT  }]
Sep  9 04:21:56.253: INFO: 
Sep  9 04:21:56.286: INFO: unable to fetch logs for pods: sample-webhook-deployment-7bc8486f8c-mrn7z[e2e-webhook-1226].container[sample-webhook].error=the server rejected our request for an unknown reason (get pods sample-webhook-deployment-7bc8486f8c-mrn7z)
Sep  9 04:21:56.306: INFO: skipping dumping cluster info - cluster too large
STEP: Collecting events from namespace "e2e-webhook-1226-markers".
STEP: Found 0 events.
Sep  9 04:21:56.352: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:21:56.352: INFO: 
Sep  9 04:21:56.378: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:21:56.379: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-webhook-1226" for this suite.
STEP: Destroying namespace "e2e-webhook-1226-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  @/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
Sep  9 04:21:56.623: INFO: Running AfterSuite actions on all nodes
Sep  9 04:21:56.623: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:846]: waiting for the deployment status valid%!(EXTRA string=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, string=sample-webhook-deployment, string=e2e-webhook-1226)
Unexpected error:
    <*errors.errorString | 0xc000dec920>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-7bc8486f8c\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735236214, loc:(*time.Location)(0x9e8b2c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7bc8486f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
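
Note: the repeated FailedCreatePodSandBox events above (the Kuryr CNI daemon returning error 500) are why the webhook pod never became Ready, so the wait on the deployment status timed out. For reference, below is a minimal sketch of the kind of poll performed while waiting for a deployment to become available, assuming client-go and a kubeconfig at the default path. The namespace and deployment name are taken from this log, but the code is only illustrative, not the e2e framework's own helper.

```go
// Minimal sketch (not the e2e framework's implementation): poll a Deployment
// until it reports the desired number of available replicas, or give up.
// Assumes a kubeconfig at the default location and the k8s.io/client-go library.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s (the cadence seen in the log above) for up to 5m.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := client.AppsV1().Deployments("e2e-webhook-1226").
			Get(context.TODO(), "sample-webhook-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("deployment status: %+v\n", d.Status)
		// Done once every desired replica is both updated and available.
		return d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas, nil
	})
	if err != nil {
		fmt.Println("deployment never became available:", err)
	}
}
```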

Stderr
[k8s.io]_Variable_Expansion_should_allow_substituting_values_in_a_volume_subpath_[sig-storage]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 304.0s

[sig-apps]_StatefulSet_[k8s.io]_Basic_StatefulSet_functionality_[StatefulSetBasic]_should_perform_rolling_updates_and_roll_backs_of_template_modifications_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 705.0s

[sig-storage]_Secrets_optional_updates_should_be_reflected_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 70.0s

[k8s.io]_Kubelet_when_scheduling_a_busybox_command_that_always_fails_in_a_pod_should_be_possible_to_delete_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.3s

[sig-storage]_Downward_API_volume_should_provide_node_allocatable_(memory)_as_default_memory_limit_if_the_limit_is_not_set_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 144.0s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_be_able_to_deny_attaching_pod_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 152.0s

[k8s.io]_Variable_Expansion_should_fail_substituting_values_in_a_volume_subpath_with_absolute_path_[sig-storage][Slow]_[Conformance]_[Suite:k8s]
e2e_tests
Time Taken: 127.0s

[sig-api-machinery]_Watchers_should_observe_add,_update,_and_delete_watch_notifications_on_configmaps_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 62.0s

[sig-node]_Downward_API_should_provide_pod_name,_namespace_and_IP_address_as_env_vars_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 50.1s

[sig-api-machinery]_CustomResourcePublishOpenAPI_[Privileged:ClusterAdmin]_works_for_multiple_CRDs_of_same_group_but_different_versions_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 277.0s

[sig-scheduling]_SchedulerPreemption_[Serial]_validates_basic_preemption_works_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 94.0s

[k8s.io]_Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message_[LinuxOnly]_from_file_when_pod_succeeds_and_TerminationMessagePolicy_FallbackToLogsOnError_is_set_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 50.4s

[sig-storage]_EmptyDir_wrapper_volumes_should_not_conflict_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 44.0s

[sig-api-machinery]_CustomResourcePublishOpenAPI_[Privileged:ClusterAdmin]_works_for_CRD_with_validation_schema_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 105.0s

[k8s.io]_[sig-node]_NoExecuteTaintManager_Single_Pod_[Serial]_removing_taint_cancels_eviction_[Disruptive]_[Conformance]_[Suite:k8s]
e2e_tests
Time Taken: 137.0s

[sig-storage]_Projected_secret_should_be_consumable_in_multiple_volumes_in_a_pod_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 67.0s

[sig-network]_Services_should_have_session_affinity_work_for_service_with_type_clusterIP_[LinuxOnly]_[Conformance]_[Skipped:Network/OVNKubernetes]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 207.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:3380]: Unexpected error:
    <*errors.errorString | 0xc0015a2130>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
occurred

Stdout
I0909 04:15:06.123325  796633 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:15:06.174: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:15:06.251: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:15:06.392: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:15:06.392: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:15:06.392: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:15:06.490: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:15:06.500: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:15:06.536: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Sep  9 04:15:06.943: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:15:07.270: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:731
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace e2e-services-7427
STEP: creating service affinity-clusterip in namespace e2e-services-7427
STEP: creating replication controller affinity-clusterip in namespace e2e-services-7427
I0909 04:15:07.369849  796633 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: e2e-services-7427, replica count: 3
I0909 04:15:10.425415  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:13.425635  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:16.425918  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:19.426099  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:22.426309  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:25.426549  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:28.427002  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:31.427224  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:34.427477  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:37.427716  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:40.427962  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:43.428217  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:46.428459  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:49.428683  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:52.428918  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:15:55.429201  796633 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep  9 04:15:55.464: INFO: Creating new exec pod
Sep  9 04:16:12.645: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:15.220: INFO: rc: 1
Sep  9 04:16:15.220: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:16.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:18.742: INFO: rc: 1
Sep  9 04:16:18.742: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:19.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:21.718: INFO: rc: 1
Sep  9 04:16:21.718: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:22.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:24.804: INFO: rc: 1
Sep  9 04:16:24.804: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:25.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:27.768: INFO: rc: 1
Sep  9 04:16:27.768: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:28.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:30.729: INFO: rc: 1
Sep  9 04:16:30.729: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:31.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:33.832: INFO: rc: 1
Sep  9 04:16:33.832: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:34.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:36.863: INFO: rc: 1
Sep  9 04:16:36.863: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:37.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:39.958: INFO: rc: 1
Sep  9 04:16:39.958: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:40.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:42.821: INFO: rc: 1
Sep  9 04:16:42.821: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:43.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:45.715: INFO: rc: 1
Sep  9 04:16:45.715: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:46.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:48.729: INFO: rc: 1
Sep  9 04:16:48.729: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:49.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:51.712: INFO: rc: 1
Sep  9 04:16:51.712: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:52.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:55.012: INFO: rc: 1
Sep  9 04:16:55.012: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:55.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:16:57.769: INFO: rc: 1
Sep  9 04:16:57.769: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:16:58.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:00.883: INFO: rc: 1
Sep  9 04:17:00.883: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:01.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:03.774: INFO: rc: 1
Sep  9 04:17:03.774: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:04.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:06.838: INFO: rc: 1
Sep  9 04:17:06.838: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:07.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:09.813: INFO: rc: 1
Sep  9 04:17:09.813: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:10.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:12.645: INFO: rc: 1
Sep  9 04:17:12.645: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:13.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:15.643: INFO: rc: 1
Sep  9 04:17:15.643: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:16.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:19.051: INFO: rc: 1
Sep  9 04:17:19.051: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:19.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:21.738: INFO: rc: 1
Sep  9 04:17:21.738: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:22.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:24.821: INFO: rc: 1
Sep  9 04:17:24.821: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:25.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:27.719: INFO: rc: 1
Sep  9 04:17:27.719: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:28.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:30.716: INFO: rc: 1
Sep  9 04:17:30.716: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:31.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:33.731: INFO: rc: 1
Sep  9 04:17:33.731: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:34.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:36.813: INFO: rc: 1
Sep  9 04:17:36.813: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:37.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:39.760: INFO: rc: 1
Sep  9 04:17:39.760: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:40.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:42.651: INFO: rc: 1
Sep  9 04:17:42.651: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:43.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:45.799: INFO: rc: 1
Sep  9 04:17:45.799: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:46.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:48.715: INFO: rc: 1
Sep  9 04:17:48.715: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:49.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:51.671: INFO: rc: 1
Sep  9 04:17:51.671: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:52.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:54.673: INFO: rc: 1
Sep  9 04:17:54.673: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:55.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:17:57.629: INFO: rc: 1
Sep  9 04:17:57.629: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:17:58.224: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:18:00.600: INFO: rc: 1
Sep  9 04:18:00.600: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:18:01.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:18:03.721: INFO: rc: 1
Sep  9 04:18:03.721: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:18:04.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:18:06.778: INFO: rc: 1
Sep  9 04:18:06.778: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:18:07.225: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:18:09.772: INFO: rc: 1
Sep  9 04:18:09.772: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:18:10.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:18:12.721: INFO: rc: 1
Sep  9 04:18:12.721: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:18:13.221: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:18:15.736: INFO: rc: 1
Sep  9 04:18:15.736: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:18:15.736: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Sep  9 04:18:18.237: INFO: rc: 1
Sep  9 04:18:18.237: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-7427 execpod-affinityd4ssh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:18:18.238: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace e2e-services-7427, will wait for the garbage collector to delete the pods
Sep  9 04:18:18.373: INFO: Deleting ReplicationController affinity-clusterip took: 37.827834ms
Sep  9 04:18:18.873: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.479332ms
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-services-7427".
STEP: Found 29 events.
Sep  9 04:18:32.586: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-p85mp: { } Scheduled: Successfully assigned e2e-services-7427/affinity-clusterip-p85mp to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:18:32.586: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-w4qwk: { } Scheduled: Successfully assigned e2e-services-7427/affinity-clusterip-w4qwk to ostest-5xqm8-worker-0-twrlr
Sep  9 04:18:32.586: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-clusterip-zf6sn: { } Scheduled: Successfully assigned e2e-services-7427/affinity-clusterip-zf6sn to ostest-5xqm8-worker-0-rzx47
Sep  9 04:18:32.586: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityd4ssh: { } Scheduled: Successfully assigned e2e-services-7427/execpod-affinityd4ssh to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:07 -0400 EDT - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-zf6sn
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:07 -0400 EDT - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-w4qwk
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:07 -0400 EDT - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-p85mp
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:08 -0400 EDT - event for affinity-clusterip-zf6sn: {kubelet ostest-5xqm8-worker-0-rzx47} FailedMount: MountVolume.SetUp failed for volume "default-token-6lslb" : failed to sync secret cache: timed out waiting for the condition
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:47 -0400 EDT - event for affinity-clusterip-w4qwk: {multus } AddedInterface: Add eth0 [10.128.167.167/23]
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:48 -0400 EDT - event for affinity-clusterip-w4qwk: {kubelet ostest-5xqm8-worker-0-twrlr} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:48 -0400 EDT - event for affinity-clusterip-w4qwk: {kubelet ostest-5xqm8-worker-0-twrlr} Created: Created container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:48 -0400 EDT - event for affinity-clusterip-w4qwk: {kubelet ostest-5xqm8-worker-0-twrlr} Started: Started container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:48 -0400 EDT - event for affinity-clusterip-zf6sn: {multus } AddedInterface: Add eth0 [10.128.166.169/23]
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:49 -0400 EDT - event for affinity-clusterip-zf6sn: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:49 -0400 EDT - event for affinity-clusterip-zf6sn: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:49 -0400 EDT - event for affinity-clusterip-zf6sn: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:50 -0400 EDT - event for affinity-clusterip: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint e2e-services-7427/affinity-clusterip: Operation cannot be fulfilled on endpoints "affinity-clusterip": the object has been modified; please apply your changes to the latest version and try again
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:52 -0400 EDT - event for affinity-clusterip-p85mp: {multus } AddedInterface: Add eth0 [10.128.166.65/23]
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:53 -0400 EDT - event for affinity-clusterip-p85mp: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:53 -0400 EDT - event for affinity-clusterip-p85mp: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:15:54 -0400 EDT - event for affinity-clusterip-p85mp: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:16:08 -0400 EDT - event for execpod-affinityd4ssh: {multus } AddedInterface: Add eth0 [10.128.167.104/23]
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:16:09 -0400 EDT - event for execpod-affinityd4ssh: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container agnhost-pause
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:16:09 -0400 EDT - event for execpod-affinityd4ssh: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container agnhost-pause
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:16:09 -0400 EDT - event for execpod-affinityd4ssh: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:18:18 -0400 EDT - event for affinity-clusterip-p85mp: {kubelet ostest-5xqm8-worker-0-cbbx9} Killing: Stopping container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:18:18 -0400 EDT - event for affinity-clusterip-w4qwk: {kubelet ostest-5xqm8-worker-0-twrlr} Killing: Stopping container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:18:18 -0400 EDT - event for affinity-clusterip-zf6sn: {kubelet ostest-5xqm8-worker-0-rzx47} Killing: Stopping container affinity-clusterip
Sep  9 04:18:32.586: INFO: At 2020-09-09 04:18:18 -0400 EDT - event for execpod-affinityd4ssh: {kubelet ostest-5xqm8-worker-0-cbbx9} Killing: Stopping container agnhost-pause
Sep  9 04:18:32.595: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:18:32.595: INFO: 
Sep  9 04:18:32.613: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:18:32.614: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-services-7427" for this suite.
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:735
Sep  9 04:18:32.659: INFO: Running AfterSuite actions on all nodes
Sep  9 04:18:32.659: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:3380]: Unexpected error:
    <*errors.errorString | 0xc0015a2130>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
occurred

Stderr
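For reference, the reachability probe the test loops on above can be replayed by hand while the test resources still exist; a minimal sketch using the namespace, exec pod, and nc invocation taken from the log (all of which are deleted during teardown):

    NS=e2e-services-7427

    # The service should report ClientIP affinity and have ready endpoints behind it.
    kubectl -n "$NS" get svc affinity-clusterip -o jsonpath='{.spec.sessionAffinity}'
    kubectl -n "$NS" get endpoints affinity-clusterip

    # Same probe the test runs from its exec pod.
    kubectl -n "$NS" exec execpod-affinityd4ssh -- /bin/sh -x -c 'nc -zv -t -w 2 affinity-clusterip 80'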
[sig-cli]_Kubectl_client_Kubectl_api-versions_should_check_if_v1_is_in_available_api_versions__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.9s

[sig-storage]_Projected_secret_should_be_consumable_from_pods_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 46.3s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_be_able_to_deny_custom_resource_creation,_update_and_deletion_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 35.1s

[sig-scheduling]_SchedulerPreemption_[Serial]_validates_lower_priority_pod_preemption_by_critical_pod_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 114.0s

[sig-api-machinery]_Garbage_collector_should_orphan_pods_created_by_rc_if_delete_options_say_so_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 41.8s

[sig-network]_IngressClass_API__should_support_creating_IngressClass_API_operations_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.1s

[sig-storage]_Projected_configMap_should_be_consumable_in_multiple_volumes_in_the_same_pod_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 33.6s

[sig-api-machinery]_ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_configMap._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 17.3s

[sig-network]_Services_should_serve_multiport_endpoints_from_pods__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 177.0s

[k8s.io]_Container_Runtime_blackbox_test_when_starting_a_container_that_exits_should_run_with_the_expected_status_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 348.0s

[sig-api-machinery]_CustomResourceDefinition_resources_[Privileged:ClusterAdmin]_custom_resource_defaulting_for_requests_and_from_storage_works__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 3.1s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_patching/updating_a_mutating_webhook_should_work_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 146.0s

[sig-storage]_EmptyDir_volumes_should_support_(root,0644,default)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 145.0s

[k8s.io]_Kubelet_when_scheduling_a_busybox_command_that_always_fails_in_a_pod_should_have_an_terminated_reason_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 38.7s

[sig-storage]_ConfigMap_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_mode_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 42.0s

[sig-cli]_Kubectl_client_Update_Demo_should_scale_a_replication_controller__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 200.0s

[sig-storage]_EmptyDir_volumes_should_support_(root,0777,tmpfs)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 45.8s

[sig-scheduling]_LimitRange_should_create_a_LimitRange_with_defaults_and_ensure_pod_has_those_defaults_applied._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 10.9s

[sig-auth]_ServiceAccounts_should_allow_opting_out_of_API_token_automount__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.7s

[k8s.io]_Docker_Containers_should_be_able_to_override_the_image's_default_command_and_arguments_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 45.8s

[sig-storage]_Secrets_should_be_consumable_from_pods_in_volume_as_non-root_with_defaultMode_and_fsGroup_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 35.9s

[sig-storage]_Projected_downwardAPI_should_set_DefaultMode_on_files_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 33.6s

[k8s.io]_Docker_Containers_should_use_the_image_defaults_if_command_and_args_are_blank_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 51.2s

[sig-storage]_Projected_secret_should_be_consumable_from_pods_in_volume_as_non-root_with_defaultMode_and_fsGroup_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 301.0s

[sig-network]_Networking_Granular_Checks:_Pods_should_function_for_node-pod_communication:_http_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 302.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/network/utils.go:705]: Unexpected error:
    <*errors.errorString | 0xc000278850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
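When netserver-0 never reaches Ready within the timeout (as in the Stdout below), the usual first step is to look at the pod's events and its node; a quick sketch, valid only while the test namespace shown in the log (e2e-pod-network-test-3433) still exists:

    NS=e2e-pod-network-test-3433

    # Which node the pod landed on and its current phase.
    kubectl -n "$NS" get pod netserver-0 -o wide

    # Scheduling, image-pull, CNI, and readiness-probe events usually explain a long Pending spell.
    kubectl -n "$NS" describe pod netserver-0

    # Readiness of the nodes themselves.
    kubectl get nodes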

Stdout
I0909 04:09:57.552300  771099 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:09:57.613: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:09:57.653: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:09:57.699: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:09:57.699: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:09:57.699: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:09:57.763: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:09:57.768: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:09:57.791: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-network] Networking
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep  9 04:09:58.197: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:09:58.532: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace e2e-pod-network-test-3433
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep  9 04:09:58.552: INFO: Waiting up to 10m0s for all (but 100) nodes to be schedulable
Sep  9 04:09:58.826: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:00.850: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:02.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:04.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:06.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:08.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:10.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:12.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:14.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:16.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:18.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:20.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:22.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:24.968: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:26.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:28.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:30.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:32.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:34.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:36.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:38.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:40.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:42.843: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:44.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:46.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:48.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:50.851: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:52.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:54.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:56.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:10:58.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:00.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:02.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:04.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:06.833: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:08.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:10.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:12.843: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:14.859: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:16.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:18.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:20.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:22.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:24.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:26.843: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:28.833: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:30.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:32.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:34.855: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:36.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:38.847: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:40.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:42.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:44.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:46.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:48.876: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:50.847: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:52.832: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:54.850: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:56.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:11:58.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:00.864: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:02.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:04.853: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:06.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:08.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:10.833: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:12.847: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:14.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:16.851: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:18.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:20.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:22.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:24.970: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:26.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:28.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:30.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:32.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:34.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:36.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:38.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:40.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:42.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:44.851: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:46.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:48.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:50.856: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:52.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:54.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:56.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:12:58.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:00.854: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:02.855: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:04.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:06.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:08.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:10.832: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:12.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:14.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:16.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:18.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:20.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:22.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:24.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:26.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:28.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:30.862: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:32.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:34.863: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:36.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:38.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:40.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:42.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:44.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:46.853: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:48.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:50.850: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:52.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:54.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:56.834: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:13:58.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:00.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:02.836: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:04.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:06.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:08.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:10.872: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:12.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:14.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:16.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:18.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:20.855: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:22.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:24.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:26.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:28.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:30.834: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:32.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:34.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:36.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:38.876: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:40.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:42.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:44.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:46.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:48.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:50.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:52.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:54.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:56.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:58.861: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep  9 04:14:58.887: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
[AfterEach] [sig-network] Networking
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-pod-network-test-3433".
STEP: Found 19 events.
Sep  9 04:14:58.906: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned e2e-pod-network-test-3433/netserver-0 to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:14:58.906: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned e2e-pod-network-test-3433/netserver-1 to ostest-5xqm8-worker-0-rzx47
Sep  9 04:14:58.906: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned e2e-pod-network-test-3433/netserver-2 to ostest-5xqm8-worker-0-twrlr
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:11:48 -0400 EDT - event for netserver-1: {multus } AddedInterface: Add eth0 [10.128.192.78/23]
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:11:48 -0400 EDT - event for netserver-1: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:11:49 -0400 EDT - event for netserver-1: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container webserver
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:11:49 -0400 EDT - event for netserver-1: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container webserver
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:12:45 -0400 EDT - event for netserver-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(6104dc20a31121771aab79970bf8960df7e69580a40969e8d063460083c71ba7): netplugin failed: "2020/09/09 08:09:59 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-pod-network-test-3433;K8S_POD_NAME=netserver-0;K8S_POD_INFRA_CONTAINER_ID=6104dc20a31121771aab79970bf8960df7e69580a40969e8d063460083c71ba7, CNI_NETNS=/var/run/netns/2e8bffc3-91b6-444d-8aac-f1510f1e9857).\n"
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:12:45 -0400 EDT - event for netserver-2: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(fd22b10eb38a57a7eee931e0a57a34d7c4b507a478fb5b32bbeda971cd9e71d9): netplugin failed: "2020/09/09 08:09:59 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-pod-network-test-3433;K8S_POD_NAME=netserver-2;K8S_POD_INFRA_CONTAINER_ID=fd22b10eb38a57a7eee931e0a57a34d7c4b507a478fb5b32bbeda971cd9e71d9, CNI_NETNS=/var/run/netns/1afc9e48-6949-4ad9-94ab-977cdaff2e07).\n"
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:13:07 -0400 EDT - event for netserver-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(3cac33b9f2b03a1350292daa6d9db3cb74ca47c2482b8a9fd8cc6caa63ef2685): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:13:10 -0400 EDT - event for netserver-2: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(abde792a67b3769218acc50f9b3239ba1b33955eeefe18cbd3e1c6753766f793): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:13:32 -0400 EDT - event for netserver-2: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(e1d61de5bc87436fb8c334f1e60c2fb48606ebc35f1be63c8e81ae7733eadd40): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:13:33 -0400 EDT - event for netserver-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(3c9511cca6fc8d33c3b5ed8e87962e5114f8e4e2af5eacdf6dc17b43833dd44d): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:13:55 -0400 EDT - event for netserver-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(05835d42804922b8d611cdd92ff2d7529978ce454792f81e2f686f158ceb8c5d): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:13:56 -0400 EDT - event for netserver-2: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(4b67c3ab223ae416d50eafb927bdf9ae1c69474e88450e6f6912b02893bbde32): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:14:20 -0400 EDT - event for netserver-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(bf12c36cbde4a25c70fbcb88a5dc3dfe62a7d49c13eec241426f8462e2dc7f9a): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:14:20 -0400 EDT - event for netserver-2: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(4bb0d7f3565d0c22ae15b6c87814eb6c6da52e5cd80bb6c321439b6916f1cd6e): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:58.906: INFO: At 2020-09-09 04:14:45 -0400 EDT - event for netserver-2: {kubelet ostest-5xqm8-worker-0-twrlr} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(6dae1615824cb4feb73bde1fc96ff5bd5d6e147429e57572e09cf34cf54a0693): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

Sep  9 04:14:58.906: INFO: At 2020-09-09 04:14:47 -0400 EDT - event for netserver-0: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(3d27693defd790042597dd794ed7dae8e9a4fd95a3950238588c00f2def507d4): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

Sep  9 04:14:58.922: INFO: POD          NODE                         PHASE    GRACE  CONDITIONS
Sep  9 04:14:58.922: INFO: netserver-0  ostest-5xqm8-worker-0-cbbx9  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT  }]
Sep  9 04:14:58.922: INFO: netserver-1  ostest-5xqm8-worker-0-rzx47  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:12:00 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:12:00 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT  }]
Sep  9 04:14:58.922: INFO: netserver-2  ostest-5xqm8-worker-0-twrlr  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:09:58 -0400 EDT  }]
Sep  9 04:14:58.922: INFO: 
Sep  9 04:14:59.030: INFO: netserver-1[e2e-pod-network-test-3433].container[webserver].log
2020/09/09 08:11:49 Started UDP server
2020/09/09 08:12:00 GET /healthz
2020/09/09 08:12:02 GET /healthz
2020/09/09 08:12:11 GET /healthz
2020/09/09 08:12:12 GET /healthz
2020/09/09 08:12:20 GET /healthz
2020/09/09 08:12:22 GET /healthz
2020/09/09 08:12:30 GET /healthz
2020/09/09 08:12:32 GET /healthz
2020/09/09 08:12:40 GET /healthz
2020/09/09 08:12:42 GET /healthz
2020/09/09 08:12:50 GET /healthz
2020/09/09 08:12:52 GET /healthz
2020/09/09 08:13:00 GET /healthz
2020/09/09 08:13:02 GET /healthz
2020/09/09 08:13:11 GET /healthz
2020/09/09 08:13:12 GET /healthz
2020/09/09 08:13:20 GET /healthz
2020/09/09 08:13:22 GET /healthz
2020/09/09 08:13:30 GET /healthz
2020/09/09 08:13:32 GET /healthz
2020/09/09 08:13:40 GET /healthz
2020/09/09 08:13:42 GET /healthz
2020/09/09 08:13:50 GET /healthz
2020/09/09 08:13:52 GET /healthz
2020/09/09 08:14:00 GET /healthz
2020/09/09 08:14:02 GET /healthz
2020/09/09 08:14:10 GET /healthz
2020/09/09 08:14:12 GET /healthz
2020/09/09 08:14:20 GET /healthz
2020/09/09 08:14:22 GET /healthz
2020/09/09 08:14:30 GET /healthz
2020/09/09 08:14:32 GET /healthz
2020/09/09 08:14:40 GET /healthz
2020/09/09 08:14:42 GET /healthz
2020/09/09 08:14:50 GET /healthz
2020/09/09 08:14:52 GET /healthz

Sep  9 04:14:59.072: INFO: unable to fetch logs for pods: [netserver-0[e2e-pod-network-test-3433].container[webserver].error=the server rejected our request for an unknown reason (get pods netserver-0), netserver-2[e2e-pod-network-test-3433].container[webserver].error=the server rejected our request for an unknown reason (get pods netserver-2)]
Sep  9 04:14:59.091: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:14:59.091: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-pod-network-test-3433" for this suite.
Sep  9 04:14:59.153: INFO: Running AfterSuite actions on all nodes
Sep  9 04:14:59.153: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/network/utils.go:705]: Unexpected error:
    <*errors.errorString | 0xc000278850>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stderr
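
Editor's note: the failure above is the framework's readiness wait hitting its timeout. As the events show, netserver-0 and netserver-2 never left Pending because every kuryr CNI sandbox add failed ("CNI Daemon returned error 500"), so only netserver-1 ever became Ready. Below is a minimal sketch, not the framework's own code, of the same kind of readiness poll against the namespace and pod named in the log; the kubeconfig path and the client-go wiring are assumptions for illustration.

// Sketch: poll a pod until it is Running with Ready=true, the condition
// netserver-0 never reached in the log above. Assumes a kubeconfig at the
// default location; namespace and pod name are taken from the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod is Running and its Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default path (~/.kube/config); adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns, name := "e2e-pod-network-test-3433", "netserver-0"

	// Same shape as the wait in the log: check every 2s, give up after 5m.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("The status of Pod %s is %s\n", name, pod.Status.Phase)
		return isPodReady(pod), nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}
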
[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_listing_mutating_webhooks_should_work_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 127.0s

[sig-api-machinery]_Watchers_should_observe_an_object_deletion_if_it_stops_meeting_the_requirements_of_the_selector_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 12.0s

[sig-network]_Services_should_find_a_service_from_listing_all_namespaces_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.7s

[sig-network]_DNS_should_provide_/etc/hosts_entries_for_the_cluster_[LinuxOnly]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 336.0s

[sig-api-machinery]_Garbage_collector_should_delete_RS_created_by_deployment_when_not_orphaning_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 3.1s

[sig-storage]_Secrets_should_be_consumable_from_pods_in_volume_with_defaultMode_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 115.0s

[sig-apps]_Daemon_set_[Serial]_should_retry_creating_failed_daemon_pods_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 46.7s

[k8s.io]_Variable_Expansion_should_allow_substituting_values_in_a_container's_args_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 315.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc001d095b0>: {
        s: "expected pod \"var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f\" success: Gave up after waiting 5m0s for pod \"var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f\" to be \"Succeeded or Failed\"",
    }
    expected pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f" success: Gave up after waiting 5m0s for pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f" to be "Succeeded or Failed"
occurred
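
Editor's note: the "Gave up after waiting 5m0s" error above has the same root cause as the previous failure; the Stdout below shows the pod stuck Pending behind repeated FailedCreatePodSandBox events from the kuryr CNI. A hedged diagnostic sketch that surfaces those events for the namespace named in the log follows; the client-go usage and kubeconfig path are assumptions, and the event filtering is done client-side rather than relying on a field selector.

// Sketch: list events in the test namespace and print the
// FailedCreatePodSandBox ones, i.e. the kuryr CNI errors shown below.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default path (~/.kube/config); adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Namespace taken from the log above.
	events, err := client.CoreV1().Events("e2e-var-expansion-403").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		if ev.Reason == "FailedCreatePodSandBox" {
			fmt.Printf("%s %s/%s: %s\n", ev.LastTimestamp, ev.InvolvedObject.Kind, ev.InvolvedObject.Name, ev.Message)
		}
	}
}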

Stdout
I0909 04:09:24.064812  768120 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:09:24.128: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:09:24.157: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:09:24.215: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:09:24.215: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:09:24.215: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:09:24.246: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:09:24.250: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:09:24.262: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [k8s.io] Variable Expansion
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename var-expansion
Sep  9 04:09:24.677: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:09:25.002: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Sep  9 04:09:25.112: INFO: Waiting up to 5m0s for pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f" in namespace "e2e-var-expansion-403" to be "Succeeded or Failed"
Sep  9 04:09:25.148: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.122653ms
Sep  9 04:09:27.176: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063840888s
Sep  9 04:09:29.199: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087351465s
Sep  9 04:09:31.230: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117981659s
Sep  9 04:09:33.239: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127172286s
Sep  9 04:09:35.266: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.154099551s
Sep  9 04:09:37.294: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182482728s
Sep  9 04:09:39.309: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.19735557s
Sep  9 04:09:41.327: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.215552329s
Sep  9 04:09:43.342: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.230444027s
Sep  9 04:09:45.358: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.245710096s
Sep  9 04:09:47.375: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.263419162s
Sep  9 04:09:49.391: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.278679984s
Sep  9 04:09:51.413: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.301586886s
Sep  9 04:09:53.428: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.316563502s
Sep  9 04:09:55.457: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.345290299s
Sep  9 04:09:57.467: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.355480325s
Sep  9 04:09:59.493: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.381594143s
Sep  9 04:10:01.517: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.405584986s
Sep  9 04:10:03.546: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.434566995s
Sep  9 04:10:05.579: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.467093716s
Sep  9 04:10:07.614: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.502135425s
Sep  9 04:10:09.631: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 44.519405155s
Sep  9 04:10:11.665: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.553137893s
Sep  9 04:10:13.678: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.566571802s
Sep  9 04:10:15.701: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.589433377s
Sep  9 04:10:17.751: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.63871155s
Sep  9 04:10:19.760: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 54.648102604s
Sep  9 04:10:21.776: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 56.663889875s
Sep  9 04:10:23.796: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 58.684064452s
Sep  9 04:10:25.808: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.695931056s
Sep  9 04:10:27.824: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.711636957s
Sep  9 04:10:29.840: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.727726042s
Sep  9 04:10:31.850: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.738055574s
Sep  9 04:10:33.858: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.74658937s
Sep  9 04:10:35.893: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.7806283s
Sep  9 04:10:37.904: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.791856862s
Sep  9 04:10:39.919: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.807585123s
Sep  9 04:10:41.959: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.846787168s
Sep  9 04:10:43.976: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.863806385s
Sep  9 04:10:45.988: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.876426925s
Sep  9 04:10:48.002: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.889695718s
Sep  9 04:10:50.020: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.907929778s
Sep  9 04:10:52.038: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.926049984s
Sep  9 04:10:54.051: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.939376071s
Sep  9 04:10:56.062: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.950560429s
Sep  9 04:10:58.071: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.958756687s
Sep  9 04:11:00.100: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.9877367s
Sep  9 04:11:02.110: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.997865119s
Sep  9 04:11:04.139: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.026660248s
Sep  9 04:11:06.162: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m41.050300002s
Sep  9 04:11:08.180: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.067658102s
Sep  9 04:11:10.233: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.121031023s
Sep  9 04:11:12.242: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m47.130065976s
Sep  9 04:11:14.259: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m49.147025884s
Sep  9 04:11:16.274: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m51.162219374s
Sep  9 04:11:18.286: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m53.174294226s
Sep  9 04:11:20.304: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m55.192592117s
Sep  9 04:11:22.316: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m57.203825696s
Sep  9 04:11:24.329: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m59.217154213s
Sep  9 04:11:26.350: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m1.238063235s
Sep  9 04:11:28.368: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m3.255671478s
Sep  9 04:11:30.382: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m5.269927178s
Sep  9 04:11:32.398: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m7.286526163s
Sep  9 04:11:34.421: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m9.30959844s
Sep  9 04:11:36.437: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m11.325435186s
Sep  9 04:11:38.462: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.350336182s
Sep  9 04:11:40.494: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m15.382470598s
Sep  9 04:11:42.507: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m17.39525821s
Sep  9 04:11:44.530: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m19.418053022s
Sep  9 04:11:46.587: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m21.474639673s
Sep  9 04:11:48.610: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m23.498558883s
Sep  9 04:11:50.623: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m25.511574894s
Sep  9 04:11:52.654: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m27.542036746s
Sep  9 04:11:54.672: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m29.560316348s
Sep  9 04:11:56.689: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m31.577501559s
Sep  9 04:11:58.708: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m33.595732398s
Sep  9 04:12:00.732: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m35.619725825s
Sep  9 04:12:02.749: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m37.637113418s
Sep  9 04:12:04.764: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m39.651848142s
Sep  9 04:12:06.778: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m41.665809031s
Sep  9 04:12:08.792: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m43.679716234s
Sep  9 04:12:10.808: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m45.696084395s
Sep  9 04:12:12.823: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m47.710750969s
Sep  9 04:12:14.845: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m49.732628352s
Sep  9 04:12:16.863: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m51.750975254s
Sep  9 04:12:18.873: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m53.761533906s
Sep  9 04:12:20.888: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m55.77579843s
Sep  9 04:12:22.909: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m57.796703647s
Sep  9 04:12:24.988: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m59.876470446s
Sep  9 04:12:27.011: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m1.898675561s
Sep  9 04:12:29.026: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m3.914452605s
Sep  9 04:12:31.054: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m5.941703114s
Sep  9 04:12:33.064: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m7.952232243s
Sep  9 04:12:35.078: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m9.966441326s
Sep  9 04:12:37.089: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m11.977372125s
Sep  9 04:12:39.109: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m13.996633014s
Sep  9 04:12:41.123: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.011229925s
Sep  9 04:12:43.153: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.041388825s
Sep  9 04:12:45.170: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.058092151s
Sep  9 04:12:47.188: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.075840945s
Sep  9 04:12:49.197: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.085323544s
Sep  9 04:12:51.216: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.103958147s
Sep  9 04:12:53.234: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.122517748s
Sep  9 04:12:55.252: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.140471701s
Sep  9 04:12:57.269: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.156930336s
Sep  9 04:12:59.286: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.173618033s
Sep  9 04:13:01.336: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.223950762s
Sep  9 04:13:03.362: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.250243928s
Sep  9 04:13:05.393: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.280900971s
Sep  9 04:13:07.400: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.288435501s
Sep  9 04:13:09.425: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.313439308s
Sep  9 04:13:11.441: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.329556078s
Sep  9 04:13:13.454: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.342144707s
Sep  9 04:13:15.482: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.370344872s
Sep  9 04:13:17.564: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.451863898s
Sep  9 04:13:19.589: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.476935511s
Sep  9 04:13:21.618: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.506291653s
Sep  9 04:13:23.634: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.522542699s
Sep  9 04:13:25.645: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.532717635s
Sep  9 04:13:27.662: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.550392406s
Sep  9 04:13:29.686: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.57404296s
Sep  9 04:13:31.695: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.582899539s
Sep  9 04:13:33.712: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.600231014s
Sep  9 04:13:35.728: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.616548041s
Sep  9 04:13:37.747: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.634975025s
Sep  9 04:13:39.762: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.65016709s
Sep  9 04:13:41.771: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.659543864s
Sep  9 04:13:43.781: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.668668688s
Sep  9 04:13:45.800: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.68766347s
Sep  9 04:13:47.812: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.69985631s
Sep  9 04:13:49.823: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.710944092s
Sep  9 04:13:51.833: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.72112285s
Sep  9 04:13:53.926: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.814529862s
Sep  9 04:13:55.945: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.8331675s
Sep  9 04:13:57.956: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.844506285s
Sep  9 04:13:59.970: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.858410908s
Sep  9 04:14:01.981: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.869550227s
Sep  9 04:14:04.002: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.890048483s
Sep  9 04:14:06.010: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.898298009s
Sep  9 04:14:08.031: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.919474314s
Sep  9 04:14:10.043: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.930828218s
Sep  9 04:14:12.053: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.94072396s
Sep  9 04:14:14.063: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.951451472s
Sep  9 04:14:16.083: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.971051487s
Sep  9 04:14:18.105: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.993544352s
Sep  9 04:14:20.135: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.023294945s
Sep  9 04:14:22.156: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.043839954s
Sep  9 04:14:24.175: INFO: Pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.062908274s
Sep  9 04:14:26.229: INFO: Failed to get logs from node "ostest-5xqm8-worker-0-cbbx9" pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f" container "dapi-container": the server rejected our request for an unknown reason (get pods var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f)
STEP: delete the pod
Sep  9 04:14:26.253: INFO: Waiting for pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to disappear
Sep  9 04:14:26.266: INFO: Pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f still exists
Sep  9 04:14:28.267: INFO: Waiting for pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to disappear
Sep  9 04:14:28.277: INFO: Pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f still exists
Sep  9 04:14:30.267: INFO: Waiting for pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to disappear
Sep  9 04:14:30.278: INFO: Pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f still exists
Sep  9 04:14:32.267: INFO: Waiting for pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to disappear
Sep  9 04:14:32.280: INFO: Pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f still exists
Sep  9 04:14:34.267: INFO: Waiting for pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to disappear
Sep  9 04:14:34.284: INFO: Pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f still exists
Sep  9 04:14:36.267: INFO: Waiting for pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to disappear
Sep  9 04:14:36.278: INFO: Pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f still exists
Sep  9 04:14:38.267: INFO: Waiting for pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to disappear
Sep  9 04:14:38.276: INFO: Pod var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-var-expansion-403".
STEP: Found 9 events.
Sep  9 04:14:38.291: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: { } Scheduled: Successfully assigned e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:11:40 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(a2b33079fa187c1976cce9ce485977fa79ed66a7bae3669ceea54edded6eff53): netplugin failed: "2020/09/09 08:09:25 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-var-expansion-403;K8S_POD_NAME=var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f;K8S_POD_INFRA_CONTAINER_ID=a2b33079fa187c1976cce9ce485977fa79ed66a7bae3669ceea54edded6eff53, CNI_NETNS=/var/run/netns/24218965-6cac-4e4d-8b57-bfae577fc86d).\n"
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:12:04 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(3ae763538d0abbb1628fc2e8294a4de3fcfc36145752823394cffe2f344db4e7): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:12:30 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(d6fde9f5c5cc3b309ed904aab0c50646ae813bc447181a6fb341d57f53bc35bc): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:12:53 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(6c71d2593e0bd3ce3d42f3522ea1abeea647ad20db399b1e8ba18e532312619f): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:13:19 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(90d9945af927afd73de71b966b7a079697d64c35051f419a065e3d381faeeb86): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:13:41 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(9b0621e99c6064b93707337280a8f8f77fa1eae6d67099beb46383b16fce2d91): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:14:06 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(579cc3452654616a1c5b07ec2fdb48e9219b6b3543774d265e607e719cc4f9fd): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:38.291: INFO: At 2020-09-09 04:14:30 -0400 EDT - event for var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(79931d71452c67ff8a084e02d5c0da9073cfe1ada73a18c19d2742d512789dee): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:14:38.312: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:14:38.312: INFO: 
Sep  9 04:14:38.333: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:14:38.334: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-var-expansion-403" for this suite.
Sep  9 04:14:38.387: INFO: Running AfterSuite actions on all nodes
Sep  9 04:14:38.387: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc001d095b0>: {
        s: "expected pod \"var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f\" success: Gave up after waiting 5m0s for pod \"var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f\" to be \"Succeeded or Failed\"",
    }
    expected pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f" success: Gave up after waiting 5m0s for pod "var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f" to be "Succeeded or Failed"
occurred

Stderr
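
Note: the stdout above shows the pod never left Pending because every sandbox creation attempt failed with a kuryr CNI "error 500", so the framework's 5m0s wait for the pod to reach "Succeeded or Failed" expired. For illustration only, below is a minimal client-go sketch of that kind of poll-until-terminal-phase loop; it is not the e2e framework's actual code, and the kubeconfig path, 2-second poll interval, and helper name are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompletion polls a pod until it reaches Succeeded or Failed,
// or the timeout expires (the run above gave up after 5m0s).
func waitForPodCompletion(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) { // 2s interval is an assumption
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			// Still Pending/Running (here: sandbox creation kept failing), keep polling.
			return false, nil
		}
	})
}

func main() {
	// Kubeconfig path mirrors the one in the log but is a placeholder assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", ".kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = waitForPodCompletion(client, "e2e-var-expansion-403",
		"var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f", 5*time.Minute)
	fmt.Println("result:", err)
}
```
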
[k8s.io]_Probing_container_should_*not*_be_restarted_with_a_tcp:8080_liveness_probe_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 274.0s

[sig-storage]_EmptyDir_volumes_should_support_(non-root,0666,default)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 46.8s

[sig-storage]_Projected_downwardAPI_should_provide_node_allocatable_(cpu)_as_default_cpu_limit_if_the_limit_is_not_set_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 44.4s

[k8s.io]_Container_Lifecycle_Hook_when_create_a_pod_with_lifecycle_hook_should_execute_prestop_http_hook_properly_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 52.8s

[sig-apps]_ReplicationController_should_surface_a_failure_condition_on_a_common_issue_like_exceeded_quota_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 5.8s

[k8s.io]_Probing_container_should_*not*_be_restarted_with_a_exec__cat_/tmp/health__liveness_probe_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 282.0s

[sig-apps]_StatefulSet_[k8s.io]_Basic_StatefulSet_functionality_[StatefulSetBasic]_Should_recreate_evicted_statefulset_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 157.0s

[sig-node]_ConfigMap_should_run_through_a_ConfigMap_lifecycle_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.9s

[k8s.io]_Variable_Expansion_should_allow_substituting_values_in_a_container's_command_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 66.0s

[sig-api-machinery]_Garbage_collector_should_delete_pods_created_by_rc_when_not_orphaning_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 67.0s

[sig-scheduling]_SchedulerPredicates_[Serial]_validates_that_there_is_no_conflict_between_pods_with_same_hostPort_but_different_hostIP_and_protocol_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 57.6s

[sig-api-machinery]_Namespaces_[Serial]_should_ensure_that_all_services_are_removed_when_a_namespace_is_deleted_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 15.8s

[sig-storage]_Downward_API_volume_should_update_labels_on_modification_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 66.0s

[sig-storage]_EmptyDir_volumes_pod_should_support_shared_volumes_between_containers_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 50.0s

[sig-instrumentation]_Events_API_should_ensure_that_an_event_can_be_fetched,_patched,_deleted,_and_listed_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.6s

[sig-network]_Services_should_provide_secure_master_service__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.1s

[sig-apps]_ReplicationController_should_release_no_longer_matching_pods_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 7.8s

[sig-apps]_ReplicationController_should_adopt_matching_pods_on_creation_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 44.2s

[k8s.io]_Docker_Containers_should_be_able_to_override_the_image's_default_command_(docker_entrypoint)_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 52.7s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_not_be_able_to_mutate_or_prevent_deletion_of_webhook_configuration_objects_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 34.8s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_mutate_pod_and_apply_defaults_after_mutation_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 35.5s

[sig-api-machinery]_CustomResourcePublishOpenAPI_[Privileged:ClusterAdmin]_works_for_CRD_without_validation_schema_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 86.0s

[sig-storage]_Downward_API_volume_should_update_annotations_on_modification_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 58.3s

[sig-storage]_EmptyDir_volumes_should_support_(root,0777,default)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 48.8s

[sig-api-machinery]_CustomResourcePublishOpenAPI_[Privileged:ClusterAdmin]_works_for_multiple_CRDs_of_different_groups_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 211.0s

[sig-network]_Services_should_serve_a_basic_endpoint_from_pods__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 68.0s

[sig-cli]_Kubectl_client_Kubectl_run_pod_should_create_a_pod_from_an_image_when_restart_is_Never__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 15.8s

[sig-network]_Services_should_be_able_to_change_the_type_from_ExternalName_to_ClusterIP_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 194.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:1670]: Unexpected error:
    <*errors.errorString | 0xc001a44b70>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

Stdout
I0909 04:06:51.521823  756249 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:06:51.590: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:06:51.737: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:06:52.116: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:06:52.116: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:06:52.116: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:06:52.229: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:06:52.281: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:06:52.400: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Sep  9 04:06:53.322: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:06:53.984: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:731
[It] should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace e2e-services-8942
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace e2e-services-8942
I0909 04:06:54.347823  756249 runners.go:190] Created replication controller with name: externalname-service, namespace: e2e-services-8942, replica count: 2
I0909 04:06:57.398564  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:00.399271  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:03.399525  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:06.399790  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:09.400011  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:12.400252  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:15.400547  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:18.400770  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:21.401030  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:24.401740  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:27.402303  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:30.402547  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:33.402875  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:36.403747  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:39.404724  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:42.405229  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:45.406583  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0909 04:07:48.407917  756249 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep  9 04:07:48.407: INFO: Creating new exec pod
Sep  9 04:07:59.539: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:02.105: INFO: rc: 1
Sep  9 04:08:02.105: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:03.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:05.633: INFO: rc: 1
Sep  9 04:08:05.633: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:06.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:08.783: INFO: rc: 1
Sep  9 04:08:08.783: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:09.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:11.598: INFO: rc: 1
Sep  9 04:08:11.598: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:12.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:14.575: INFO: rc: 1
Sep  9 04:08:14.575: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:15.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:17.613: INFO: rc: 1
Sep  9 04:08:17.613: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:18.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:20.624: INFO: rc: 1
Sep  9 04:08:20.625: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:21.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:23.601: INFO: rc: 1
Sep  9 04:08:23.601: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:24.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:26.627: INFO: rc: 1
Sep  9 04:08:26.628: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:27.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:30.117: INFO: rc: 1
Sep  9 04:08:30.117: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:31.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:33.722: INFO: rc: 1
Sep  9 04:08:33.722: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:34.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:36.819: INFO: rc: 1
Sep  9 04:08:36.819: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:37.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:39.890: INFO: rc: 1
Sep  9 04:08:39.890: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:40.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:42.628: INFO: rc: 1
Sep  9 04:08:42.628: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:43.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:46.304: INFO: rc: 1
Sep  9 04:08:46.304: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:47.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:49.838: INFO: rc: 1
Sep  9 04:08:49.838: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:50.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:53.888: INFO: rc: 1
Sep  9 04:08:53.888: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:54.107: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:56.764: INFO: rc: 1
Sep  9 04:08:56.764: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:08:57.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:08:59.609: INFO: rc: 1
Sep  9 04:08:59.609: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:00.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:02.555: INFO: rc: 1
Sep  9 04:09:02.556: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:03.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:05.675: INFO: rc: 1
Sep  9 04:09:05.675: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:06.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:08.630: INFO: rc: 1
Sep  9 04:09:08.630: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:09.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:11.613: INFO: rc: 1
Sep  9 04:09:11.613: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:12.106: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:14.573: INFO: rc: 1
Sep  9 04:09:14.573: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:15.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:17.624: INFO: rc: 1
Sep  9 04:09:17.624: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:18.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:20.598: INFO: rc: 1
Sep  9 04:09:20.598: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:21.109: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:23.578: INFO: rc: 1
Sep  9 04:09:23.578: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:24.107: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:26.655: INFO: rc: 1
Sep  9 04:09:26.655: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:27.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:29.645: INFO: rc: 1
Sep  9 04:09:29.645: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:30.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:32.585: INFO: rc: 1
Sep  9 04:09:32.585: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:33.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:35.503: INFO: rc: 1
Sep  9 04:09:35.503: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:36.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:38.602: INFO: rc: 1
Sep  9 04:09:38.602: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:39.106: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:41.556: INFO: rc: 1
Sep  9 04:09:41.556: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:42.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:44.582: INFO: rc: 1
Sep  9 04:09:44.582: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:45.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:47.566: INFO: rc: 1
Sep  9 04:09:47.566: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:48.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:50.628: INFO: rc: 1
Sep  9 04:09:50.628: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:51.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:53.615: INFO: rc: 1
Sep  9 04:09:53.615: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:54.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:56.601: INFO: rc: 1
Sep  9 04:09:56.601: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:09:57.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:09:59.540: INFO: rc: 1
Sep  9 04:09:59.540: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:10:00.105: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:10:02.615: INFO: rc: 1
Sep  9 04:10:02.615: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:10:02.615: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep  9 04:10:05.054: INFO: rc: 1
Sep  9 04:10:05.054: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config exec --namespace=e2e-services-8942 execpodghx5k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep  9 04:10:05.054: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-services-8942".
STEP: Found 17 events.
Sep  9 04:10:05.189: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodghx5k: { } Scheduled: Successfully assigned e2e-services-8942/execpodghx5k to ostest-5xqm8-worker-0-rzx47
Sep  9 04:10:05.189: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-jzvt7: { } Scheduled: Successfully assigned e2e-services-8942/externalname-service-jzvt7 to ostest-5xqm8-worker-0-rzx47
Sep  9 04:10:05.189: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-x6xw2: { } Scheduled: Successfully assigned e2e-services-8942/externalname-service-x6xw2 to ostest-5xqm8-worker-0-twrlr
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:06:54 -0400 EDT - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-x6xw2
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:06:54 -0400 EDT - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-jzvt7
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:27 -0400 EDT - event for externalname-service-jzvt7: {multus } AddedInterface: Add eth0 [10.128.148.79/23]
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:27 -0400 EDT - event for externalname-service-jzvt7: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:28 -0400 EDT - event for externalname-service-jzvt7: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container externalname-service
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:28 -0400 EDT - event for externalname-service-jzvt7: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container externalname-service
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:45 -0400 EDT - event for externalname-service-x6xw2: {multus } AddedInterface: Add eth0 [10.128.148.3/23]
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:46 -0400 EDT - event for externalname-service-x6xw2: {kubelet ostest-5xqm8-worker-0-twrlr} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:47 -0400 EDT - event for externalname-service-x6xw2: {kubelet ostest-5xqm8-worker-0-twrlr} Started: Started container externalname-service
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:47 -0400 EDT - event for externalname-service-x6xw2: {kubelet ostest-5xqm8-worker-0-twrlr} Created: Created container externalname-service
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:54 -0400 EDT - event for execpodghx5k: {multus } AddedInterface: Add eth0 [10.128.149.246/23]
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:55 -0400 EDT - event for execpodghx5k: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container agnhost-pause
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:55 -0400 EDT - event for execpodghx5k: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container agnhost-pause
Sep  9 04:10:05.189: INFO: At 2020-09-09 04:07:55 -0400 EDT - event for execpodghx5k: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20" already present on machine
Sep  9 04:10:05.207: INFO: POD                         NODE                         PHASE    GRACE  CONDITIONS
Sep  9 04:10:05.207: INFO: execpodghx5k                ostest-5xqm8-worker-0-rzx47  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:48 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:56 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:56 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:48 -0400 EDT  }]
Sep  9 04:10:05.207: INFO: externalname-service-jzvt7  ostest-5xqm8-worker-0-rzx47  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:06:54 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:28 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:28 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:06:54 -0400 EDT  }]
Sep  9 04:10:05.207: INFO: externalname-service-x6xw2  ostest-5xqm8-worker-0-twrlr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:06:54 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:47 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:07:47 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:06:54 -0400 EDT  }]
Sep  9 04:10:05.207: INFO: 
Sep  9 04:10:05.247: INFO: execpodghx5k[e2e-services-8942].container[agnhost-pause].log
Paused

Sep  9 04:10:05.276: INFO: externalname-service-jzvt7[e2e-services-8942].container[externalname-service].log
2020/09/09 08:07:28 Serving on port 9376.

Sep  9 04:10:05.325: INFO: externalname-service-x6xw2[e2e-services-8942].container[externalname-service].log
2020/09/09 08:07:47 Serving on port 9376.

Sep  9 04:10:05.357: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:10:05.357: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-services-8942" for this suite.
[AfterEach] [sig-network] Services
  @/k8s.io/kubernetes/test/e2e/network/service.go:735
Sep  9 04:10:05.423: INFO: Running AfterSuite actions on all nodes
Sep  9 04:10:05.423: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/network/service.go:1670]: Unexpected error:
    <*errors.errorString | 0xc001a44b70>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

Stderr
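
Note: the retry loop above is the test repeatedly running `nc -zv -t -w 2 externalname-service 80` from the exec pod until its 2m0s budget ran out; every attempt timed out even though both backend pods logged "Serving on port 9376." As a rough standalone analogue (not the test's code), the sketch below performs the same connect-with-2-second-timeout probe; the service DNS name only resolves from inside the cluster, and the one-second retry cadence is an assumption.

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// In-cluster service name taken from the log; it will not resolve
	// outside the cluster, so this is illustrative only.
	const addr = "externalname-service:80"

	deadline := time.Now().Add(2 * time.Minute) // same overall budget as the test
	for time.Now().Before(deadline) {
		// Equivalent of `nc -zv -t -w 2 <host> <port>`: try to open a TCP
		// connection with a 2-second timeout and report success on connect.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service reachable")
			return
		}
		fmt.Fprintf(os.Stderr, "retrying: %v\n", err)
		time.Sleep(time.Second) // retry cadence is an assumption
	}
	fmt.Fprintln(os.Stderr, "gave up: service not reachable within 2m0s")
	os.Exit(1)
}
```
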
[sig-auth]_ServiceAccounts_should_run_through_the_lifecycle_of_a_ServiceAccount_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.9s

[sig-apps]_Daemon_set_[Serial]_should_run_and_stop_simple_daemon_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 60.0s

[k8s.io]_Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message_[LinuxOnly]_as_empty_when_pod_succeeds_and_TerminationMessagePolicy_FallbackToLogsOnError_is_set_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 48.6s

[sig-network]_DNS_should_provide_DNS_for_services__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 92.0s

[sig-storage]_ConfigMap_binary_data_should_be_reflected_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 50.8s

[sig-storage]_Projected_downwardAPI_should_provide_container's_memory_limit_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 47.8s

[k8s.io]_Kubelet_when_scheduling_a_busybox_Pod_with_hostAliases_should_write_entries_to_/etc/hosts_[LinuxOnly]_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 61.0s

[sig-apps]_ReplicaSet_should_adopt_matching_pods_on_creation_and_release_no_longer_matching_pods_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 69.0s

[k8s.io]_KubeletManagedEtcHosts_should_test_kubelet_managed_/etc/hosts_file_[LinuxOnly]_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 77.0s

[sig-api-machinery]_CustomResourceConversionWebhook_[Privileged:ClusterAdmin]_should_be_able_to_convert_from_CR_v1_to_CR_v2_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 44.6s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_honor_timeout_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 50.3s

[sig-cli]_Kubectl_client_Proxy_server_should_support_--unix-socket=/path__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.6s

[sig-storage]_Downward_API_volume_should_provide_container's_cpu_limit_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 37.9s

[sig-api-machinery]_ResourceQuota_should_verify_ResourceQuota_with_best_effort_scope._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 20.4s

[sig-api-machinery]_Watchers_should_receive_events_on_concurrent_watches_in_same_order_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 7.5s

[sig-storage]_Subpath_Atomic_writer_volumes_should_support_subpaths_with_downward_pod_[LinuxOnly]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 54.3s

[sig-apps]_Daemon_set_[Serial]_should_update_pod_when_spec_was_updated_and_update_strategy_is_RollingUpdate_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 74.0s

[sig-api-machinery]_Secrets_should_fail_to_create_secret_due_to_empty_secret_key_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.2s

[sig-network]_DNS_should_provide_DNS_for_pods_for_Hostname_[LinuxOnly]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 31.4s

[sig-apps]_Daemon_set_[Serial]_should_run_and_stop_complex_daemon_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 53.8s

[k8s.io]_Probing_container_should_be_restarted_with_a_exec__cat_/tmp/health__liveness_probe_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 72.0s

[sig-storage]_Projected_secret_should_be_consumable_from_pods_in_volume_with_defaultMode_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 31.5s

[sig-cli]_Kubectl_client_Kubectl_diff_should_check_if_kubectl_diff_finds_a_difference_for_Deployments_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.0s

Skipped: skip [@/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:890]: test skipped temporarily to enable 1.19 rebase to merge more quickly
skipped

Stdout
I0909 04:05:02.381599  747663 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:05:02.444: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:05:02.468: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:05:02.523: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:05:02.523: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:05:02.523: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:05:02.544: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:05:02.553: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:05:02.601: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-cli] Kubectl client
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename kubectl
Sep  9 04:05:02.824: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:05:03.060: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  @/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255
[It] should check if kubectl diff finds a difference for Deployments [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-cli] Kubectl client
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep  9 04:05:03.073: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-kubectl-2555" for this suite.
Sep  9 04:05:03.113: INFO: Running AfterSuite actions on all nodes
Sep  9 04:05:03.113: INFO: Running AfterSuite actions on node 1
skip [@/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:890]: test skipped temporarily to enable 1.19 rebase to merge more quickly

Stderr
[sig-apps]_ReplicationController_should_serve_a_basic_image_on_each_replica_with_a_public_image__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 47.5s

[sig-storage]_EmptyDir_wrapper_volumes_should_not_cause_race_condition_when_used_for_configmaps_[Serial]_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 437.0s

[sig-api-machinery]_Watchers_should_be_able_to_restart_watching_from_the_last_resource_version_observed_by_the_previous_watch_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.2s

[sig-api-machinery]_CustomResourcePublishOpenAPI_[Privileged:ClusterAdmin]_updates_the_published_spec_when_one_version_gets_renamed_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 165.0s

[sig-cli]_Kubectl_client_Update_Demo_should_create_and_stop_a_replication_controller__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 59.3s

[sig-api-machinery]_ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_pod._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 14.8s

[sig-scheduling]_SchedulerPredicates_[Serial]_validates_that_there_exists_conflict_between_pods_with_same_hostPort_and_protocol_but_one_using_0.0.0.0_hostIP_[Conformance]_[Slow]_[Suite:k8s]
e2e_tests
Time Taken: 334.0s

[sig-api-machinery]_CustomResourcePublishOpenAPI_[Privileged:ClusterAdmin]_works_for_CRD_preserving_unknown_fields_in_an_embedded_object_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 95.0s

[sig-apps]_Deployment_deployment_should_support_rollover_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 59.2s

[sig-apps]_Deployment_RollingUpdateDeployment_should_delete_old_pods_and_create_new_ones_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 53.8s

[sig-storage]_EmptyDir_volumes_volume_on_default_medium_should_have_the_correct_mode_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 86.0s

[k8s.io]_Pods_should_support_retrieving_logs_from_the_container_over_websockets_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 87.0s

[sig-apps]_ReplicaSet_should_serve_a_basic_image_on_each_replica_with_a_public_image__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 107.0s

[sig-api-machinery]_server_version_should_find_the_server_version_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.3s

[k8s.io]_Probing_container_with_readiness_probe_that_fails_should_never_be_ready_and_never_restart_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 61.0s

[sig-apps]_Deployment_deployment_should_support_proportional_scaling_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 304.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/apps/deployment.go:729]: error in waiting for pods to come up: failed to wait for pods running: [timed out waiting for the condition]
Unexpected error:
    <*errors.errorString | 0xc002579070>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
occurred
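
The error above is raised while the test waits for all 10 webserver-deployment pods to reach Running; the dump that follows shows one pod still Pending in ContainerCreating when the wait gave up (the error surfaces at deployment.go:729). The standalone Go sketch below only approximates such a wait loop with client-go; the kubeconfig path, poll interval, and function name are assumptions, not the framework's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning lists pods matching selector in ns until at least want of them
// report phase Running, or the timeout expires.
func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running >= want {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("failed to wait for pods running: [timed out waiting for the condition]")
}

func main() {
	// Assumed kubeconfig path; point it at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The dump below shows 10 pods labelled name=httpd in namespace e2e-deployment-4302.
	fmt.Println(waitForPodsRunning(cs, "e2e-deployment-4302", "name=httpd", 10, 5*time.Minute))
}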

Stdout
I0909 04:01:44.161846  732231 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:01:44.219: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:01:44.260: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:01:44.344: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:01:44.344: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:01:44.344: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:01:44.365: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:01:44.372: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:01:44.393: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-apps] Deployment
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename deployment
Sep  9 04:01:45.274: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:01:45.608: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  @/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep  9 04:01:45.635: INFO: Creating deployment "webserver-deployment"
Sep  9 04:01:45.652: INFO: Waiting for observed generation 1
Sep  9 04:01:47.694: INFO: Waiting for all required pods to come up
Sep  9 04:01:47.716: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
[AfterEach] [sig-apps] Deployment
  @/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
Sep  9 04:06:47.792: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  e2e-deployment-4302 /apis/apps/v1/namespaces/e2e-deployment-4302/deployments/webserver-deployment 2ff08db7-b6db-4149-9d27-4483b2abcc56 901743 1 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] []  [{openshift-tests Update apps/v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-09 04:02:24 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*10,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002704868 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:10,UpdatedReplicas:10,AvailableReplicas:9,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-09 04:02:23 -0400 EDT,LastTransitionTime:2020-09-09 04:02:23 -0400 EDT,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-dd94f59b7" is progressing.,LastUpdateTime:2020-09-09 04:02:24 -0400 EDT,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,},},ReadyReplicas:9,CollisionCount:nil,},}

Sep  9 04:06:47.800: INFO: New ReplicaSet "webserver-deployment-dd94f59b7" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7  e2e-deployment-4302 /apis/apps/v1/namespaces/e2e-deployment-4302/replicasets/webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 901740 1 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:10 deployment.kubernetes.io/max-replicas:13 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2ff08db7-b6db-4149-9d27-4483b2abcc56 0xc002705027 0xc002705028}] []  [{kube-controller-manager Update apps/v1 2020-09-09 04:02:24 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ff08db7-b6db-4149-9d27-4483b2abcc56\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*10,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002705168 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:10,FullyLabeledReplicas:10,ObservedGeneration:1,ReadyReplicas:9,AvailableReplicas:9,Conditions:[]ReplicaSetCondition{},},}
Sep  9 04:06:47.827: INFO: Pod "webserver-deployment-dd94f59b7-48q9w" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-48q9w webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-48q9w 751236e1-6811-43b7-a542-77e950c1702b 901393 0 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.146"
    ],
    "mac": "fa:16:3e:4f:f3:24",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.146"
    ],
    "mac": "fa:16:3e:4f:f3:24",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc002705657 0xc002705658}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:06 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:07 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.174.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:ostest-5xqm8-worker-0-cbbx9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:07 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:07 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.2.198,PodIP:10.128.174.146,StartTime:2020-09-09 04:01:45 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:07 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://b22bc424f413eff019cdd7458a1981a8ba8d2d043b6dafc85fe49d49f67c8833,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.174.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.827: INFO: Pod "webserver-deployment-dd94f59b7-8pm7c" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8pm7c webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-8pm7c 2a34fff4-c110-4988-ac3f-39ff1a726ee4 901732 0 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.87"
    ],
    "mac": "fa:16:3e:94:d4:00",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.87"
    ],
    "mac": "fa:16:3e:94:d4:00",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc002705847 0xc002705848}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:13 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:23 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.175.87\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedService
Account:default,NodeName:ostest-5xqm8-worker-0-rzx47,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:23 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:23 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.1.181,PodIP:10.128.175.87,StartTime:2020-09-09 04:01:46 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:23 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://fadc5cfe87529a4460b51ca0ab924446b8d97b48cdeb3f503883de1de18e5f3e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.175.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.828: INFO: Pod "webserver-deployment-dd94f59b7-hvwv8" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hvwv8 webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-hvwv8 49f735ab-8ba0-47ba-8ffe-dcb1cceed675 901435 0 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.211"
    ],
    "mac": "fa:16:3e:5a:9d:8d",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.211"
    ],
    "mac": "fa:16:3e:5a:9d:8d",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc002705a37 0xc002705a38}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:09 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:10 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.174.211\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:ostest-5xqm8-worker-0-cbbx9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:10 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:10 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.2.198,PodIP:10.128.174.211,StartTime:2020-09-09 04:01:45 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:10 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://6d410f65fdcb3478c6daa600543760fbaa7af03ce2260e2409819d5623292447,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.174.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.828: INFO: Pod "webserver-deployment-dd94f59b7-m4bk9" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-m4bk9 webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-m4bk9 4c3933fd-44cd-4dae-ab7e-778375226540 900976 0 2020-09-09 04:01:46 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc002705d57 0xc002705d58}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-09 04:01:46 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 
{"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ostest-5xqm8-worker-0-cbbx9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.2.198,PodIP:,StartTime:2020-09-09 04:01:46 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.828: INFO: Pod "webserver-deployment-dd94f59b7-p26k2" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-p26k2 webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-p26k2 8ab4f9c2-1b3f-44f0-a7e5-57176fb9d22b 901738 0 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.85"
    ],
    "mac": "fa:16:3e:fd:bf:0a",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.85"
    ],
    "mac": "fa:16:3e:fd:bf:0a",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc002705fd7 0xc002705fd8}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:09 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:23 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.175.85\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedService
Account:default,NodeName:ostest-5xqm8-worker-0-rzx47,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:23 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:23 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.1.181,PodIP:10.128.175.85,StartTime:2020-09-09 04:01:45 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:23 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://ed022d43ca3064efe4a4b8f26c9756e510fc8b6f220ff60c7206459bec0f0df9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.175.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.828: INFO: Pod "webserver-deployment-dd94f59b7-p88mh" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-p88mh webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-p88mh 7c001873-99ed-475a-9aed-510c791d2c9a 901711 0 2020-09-09 04:01:46 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.209"
    ],
    "mac": "fa:16:3e:d1:94:d4",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.209"
    ],
    "mac": "fa:16:3e:d1:94:d4",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc001cc6207 0xc001cc6208}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:07 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:23 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.175.209\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:ostest-5xqm8-worker-0-twrlr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:22 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:22 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.3.122,PodIP:10.128.175.209,StartTime:2020-09-09 04:01:46 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:21 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://740605653cf676faaf3bcd94f6da137b25e0452a4466e2a7538e0bfe374a0db9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.175.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.829: INFO: Pod "webserver-deployment-dd94f59b7-pfc5z" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pfc5z webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-pfc5z 65951427-5cd0-4b35-bcdd-3267aa102741 901564 0 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.126"
    ],
    "mac": "fa:16:3e:5b:42:11",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.126"
    ],
    "mac": "fa:16:3e:5b:42:11",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc001cc6417 0xc001cc6418}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:13 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:15 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.175.126\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:ostest-5xqm8-worker-0-cbbx9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:15 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:15 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.2.198,PodIP:10.128.175.126,StartTime:2020-09-09 04:01:46 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:14 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://f5c31585ef8479b54c845ca03a87a1cad42e2faf211a9d7eaa690d2452d5fdc0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.175.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.829: INFO: Pod "webserver-deployment-dd94f59b7-prjxj" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-prjxj webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-prjxj ea0fab86-3c02-4bda-9f10-a5719681547a 901707 0 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.14"
    ],
    "mac": "fa:16:3e:7b:2f:fc",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.14"
    ],
    "mac": "fa:16:3e:7b:2f:fc",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc001cc6677 0xc001cc6678}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:06 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:23 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.174.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedService
Account:default,NodeName:ostest-5xqm8-worker-0-twrlr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:22 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:22 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.3.122,PodIP:10.128.174.14,StartTime:2020-09-09 04:01:46 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:22 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://e79a8aebcfd6afc632c0098d226b71d97c1702d53d5ea3dbf0409e2c27d3881f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.174.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.829: INFO: Pod "webserver-deployment-dd94f59b7-sqscr" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sqscr webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-sqscr f577a4e0-8dc4-47f3-b5e7-374a14f8c775 901735 0 2020-09-09 04:01:46 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.154"
    ],
    "mac": "fa:16:3e:c4:2b:09",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.174.154"
    ],
    "mac": "fa:16:3e:c4:2b:09",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc001cc6887 0xc001cc6888}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:09 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:23 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.174.154\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:ostest-5xqm8-worker-0-rzx47,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:23 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:23 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:46 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.1.181,PodIP:10.128.174.154,StartTime:2020-09-09 04:01:46 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:23 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://d15567f96fe4c8da53c7da9922f97310c68de3f5a508cdb63b19e499f522bb9c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.174.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep  9 04:06:47.830: INFO: Pod "webserver-deployment-dd94f59b7-t6zlg" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t6zlg webserver-deployment-dd94f59b7- e2e-deployment-4302 /api/v1/namespaces/e2e-deployment-4302/pods/webserver-deployment-dd94f59b7-t6zlg ce13d5fb-2dbf-4fbd-8f6b-a2dd26c056cd 901701 0 2020-09-09 04:01:45 -0400 EDT <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.207"
    ],
    "mac": "fa:16:3e:72:dd:cd",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "",
    "interface": "eth0",
    "ips": [
        "10.128.175.207"
    ],
    "mac": "fa:16:3e:72:dd:cd",
    "default": true,
    "dns": {}
}] openshift.io/scc:anyuid] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 0f421921-8a4f-4c34-a0fc-0e6db7dca599 0xc001cc6ab7 0xc001cc6ab8}] [kuryr.openstack.org/pod-finalizer]  [{kube-controller-manager Update v1 2020-09-09 04:01:45 -0400 EDT FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f421921-8a4f-4c34-a0fc-0e6db7dca599\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {python-requests Update v1 2020-09-09 04:01:48 -0400 EDT FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"kuryr.openstack.org/pod-finalizer\"":{}}}}} {multus Update v1 2020-09-09 04:02:06 -0400 EDT FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2020-09-09 04:02:22 -0400 EDT FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.175.207\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmb92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmb92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmb92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:ostest-5xqm8-worker-0-twrlr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c38,c37,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-9jscz,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:22 -0400 EDT,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:02:22 -0400 EDT,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-09 04:01:45 -0400 EDT,Reason:,Message:,},},Message:,Reason:,HostIP:10.196.3.122,PodIP:10.128.175.207,StartTime:2020-09-09 04:01:45 -0400 EDT,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-09 04:02:22 -0400 EDT,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:cri-o://fef37f16a5c0b86f5f1118772379915648c9e2f408e1b50e352ccbef8af76bcc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.175.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-deployment-4302".
STEP: Found 70 events.
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-48q9w: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-48q9w to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-8pm7c: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-8pm7c to ostest-5xqm8-worker-0-rzx47
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-hvwv8: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-hvwv8 to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-m4bk9: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9 to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-p26k2: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-p26k2 to ostest-5xqm8-worker-0-rzx47
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-p88mh: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-p88mh to ostest-5xqm8-worker-0-twrlr
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-pfc5z: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-pfc5z to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-prjxj: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-prjxj to ostest-5xqm8-worker-0-twrlr
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-sqscr: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-sqscr to ostest-5xqm8-worker-0-rzx47
Sep  9 04:06:47.848: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for webserver-deployment-dd94f59b7-t6zlg: { } Scheduled: Successfully assigned e2e-deployment-4302/webserver-deployment-dd94f59b7-t6zlg to ostest-5xqm8-worker-0-twrlr
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set webserver-deployment-dd94f59b7 to 10
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-prjxj
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-48q9w
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-p26k2
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-hvwv8
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-8pm7c
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-pfc5z
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:45 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-t6zlg
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:46 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-p88mh
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:46 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-deployment-dd94f59b7-m4bk9
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:01:46 -0400 EDT - event for webserver-deployment-dd94f59b7: {replicaset-controller } SuccessfulCreate: (combined from similar events): Created pod: webserver-deployment-dd94f59b7-sqscr
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:06 -0400 EDT - event for webserver-deployment-dd94f59b7-48q9w: {multus } AddedInterface: Add eth0 [10.128.174.146/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:06 -0400 EDT - event for webserver-deployment-dd94f59b7-48q9w: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:06 -0400 EDT - event for webserver-deployment-dd94f59b7-48q9w: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:06 -0400 EDT - event for webserver-deployment-dd94f59b7-prjxj: {multus } AddedInterface: Add eth0 [10.128.174.14/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:06 -0400 EDT - event for webserver-deployment-dd94f59b7-t6zlg: {multus } AddedInterface: Add eth0 [10.128.175.207/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:07 -0400 EDT - event for webserver-deployment-dd94f59b7-48q9w: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:07 -0400 EDT - event for webserver-deployment-dd94f59b7-p88mh: {multus } AddedInterface: Add eth0 [10.128.175.209/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:07 -0400 EDT - event for webserver-deployment-dd94f59b7-prjxj: {kubelet ostest-5xqm8-worker-0-twrlr} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:07 -0400 EDT - event for webserver-deployment-dd94f59b7-t6zlg: {kubelet ostest-5xqm8-worker-0-twrlr} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:08 -0400 EDT - event for webserver-deployment-dd94f59b7-p88mh: {kubelet ostest-5xqm8-worker-0-twrlr} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:09 -0400 EDT - event for webserver-deployment-dd94f59b7-hvwv8: {multus } AddedInterface: Add eth0 [10.128.174.211/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:09 -0400 EDT - event for webserver-deployment-dd94f59b7-p26k2: {kubelet ostest-5xqm8-worker-0-rzx47} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:09 -0400 EDT - event for webserver-deployment-dd94f59b7-p26k2: {multus } AddedInterface: Add eth0 [10.128.175.85/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:09 -0400 EDT - event for webserver-deployment-dd94f59b7-sqscr: {multus } AddedInterface: Add eth0 [10.128.174.154/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:10 -0400 EDT - event for webserver-deployment-dd94f59b7-hvwv8: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:10 -0400 EDT - event for webserver-deployment-dd94f59b7-hvwv8: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:10 -0400 EDT - event for webserver-deployment-dd94f59b7-hvwv8: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:10 -0400 EDT - event for webserver-deployment-dd94f59b7-sqscr: {kubelet ostest-5xqm8-worker-0-rzx47} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:13 -0400 EDT - event for webserver-deployment-dd94f59b7-8pm7c: {multus } AddedInterface: Add eth0 [10.128.175.87/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:13 -0400 EDT - event for webserver-deployment-dd94f59b7-8pm7c: {kubelet ostest-5xqm8-worker-0-rzx47} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:13 -0400 EDT - event for webserver-deployment-dd94f59b7-pfc5z: {multus } AddedInterface: Add eth0 [10.128.175.126/23]
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:14 -0400 EDT - event for webserver-deployment-dd94f59b7-pfc5z: {kubelet ostest-5xqm8-worker-0-cbbx9} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:14 -0400 EDT - event for webserver-deployment-dd94f59b7-pfc5z: {kubelet ostest-5xqm8-worker-0-cbbx9} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:14 -0400 EDT - event for webserver-deployment-dd94f59b7-pfc5z: {kubelet ostest-5xqm8-worker-0-cbbx9} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:21 -0400 EDT - event for webserver-deployment-dd94f59b7-p88mh: {kubelet ostest-5xqm8-worker-0-twrlr} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 12.893427168s
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:21 -0400 EDT - event for webserver-deployment-dd94f59b7-p88mh: {kubelet ostest-5xqm8-worker-0-twrlr} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:21 -0400 EDT - event for webserver-deployment-dd94f59b7-p88mh: {kubelet ostest-5xqm8-worker-0-twrlr} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:21 -0400 EDT - event for webserver-deployment-dd94f59b7-prjxj: {kubelet ostest-5xqm8-worker-0-twrlr} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 13.585942044s
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:21 -0400 EDT - event for webserver-deployment-dd94f59b7-prjxj: {kubelet ostest-5xqm8-worker-0-twrlr} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:21 -0400 EDT - event for webserver-deployment-dd94f59b7-t6zlg: {kubelet ostest-5xqm8-worker-0-twrlr} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 13.82732478s
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:22 -0400 EDT - event for webserver-deployment-dd94f59b7-8pm7c: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 9.205483485s
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:22 -0400 EDT - event for webserver-deployment-dd94f59b7-p26k2: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 13.084842318s
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:22 -0400 EDT - event for webserver-deployment-dd94f59b7-prjxj: {kubelet ostest-5xqm8-worker-0-twrlr} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:22 -0400 EDT - event for webserver-deployment-dd94f59b7-sqscr: {kubelet ostest-5xqm8-worker-0-rzx47} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 12.806353671s
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:22 -0400 EDT - event for webserver-deployment-dd94f59b7-t6zlg: {kubelet ostest-5xqm8-worker-0-twrlr} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:22 -0400 EDT - event for webserver-deployment-dd94f59b7-t6zlg: {kubelet ostest-5xqm8-worker-0-twrlr} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:23 -0400 EDT - event for webserver-deployment-dd94f59b7-8pm7c: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:23 -0400 EDT - event for webserver-deployment-dd94f59b7-8pm7c: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:23 -0400 EDT - event for webserver-deployment-dd94f59b7-p26k2: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:23 -0400 EDT - event for webserver-deployment-dd94f59b7-p26k2: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:23 -0400 EDT - event for webserver-deployment-dd94f59b7-sqscr: {kubelet ostest-5xqm8-worker-0-rzx47} Created: Created container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:02:23 -0400 EDT - event for webserver-deployment-dd94f59b7-sqscr: {kubelet ostest-5xqm8-worker-0-rzx47} Started: Started container httpd
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:04:10 -0400 EDT - event for webserver-deployment-dd94f59b7-m4bk9: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(2b01484bcada304fd27b06e911c95faa34032facd1ec7850ce6f53b0cb41b522): netplugin failed: "2020/09/09 08:01:46 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-deployment-4302;K8S_POD_NAME=webserver-deployment-dd94f59b7-m4bk9;K8S_POD_INFRA_CONTAINER_ID=2b01484bcada304fd27b06e911c95faa34032facd1ec7850ce6f53b0cb41b522, CNI_NETNS=/var/run/netns/e1ea1a56-253f-4fc2-8565-ff7baf7be90e).\n"
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:04:31 -0400 EDT - event for webserver-deployment-dd94f59b7-m4bk9: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(a01f54c3a137b6f0187c6e2c7764b20a3bb33d1e56673b7ef9bffe3cbaf878da): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:04:55 -0400 EDT - event for webserver-deployment-dd94f59b7-m4bk9: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(df8f00b3c3ac46f5ccc0346322d8947c1332130de1afe485ba3baf3530629d42): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:05:21 -0400 EDT - event for webserver-deployment-dd94f59b7-m4bk9: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(7db01353d67f63fd5332ccce5dda4fa33e503d1a3302ecb4a7d72a0267cd435c): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:05:45 -0400 EDT - event for webserver-deployment-dd94f59b7-m4bk9: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(28369e54354e9bb8aa7f8203f5660a476ec460254ac9c52ece47b0bdf57ff774): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:06:09 -0400 EDT - event for webserver-deployment-dd94f59b7-m4bk9: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(ab4dfd7ec4a8f8971ed1e708ca74aa43fdabf9bfae8a5c32343758843dff7fd1): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:06:47.848: INFO: At 2020-09-09 04:06:35 -0400 EDT - event for webserver-deployment-dd94f59b7-m4bk9: {kubelet ostest-5xqm8-worker-0-cbbx9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(e0882c06159cf5e437f3ab5be30997fa9cad5851de30ad2ba69afa5aa22873d1): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:06:47.865: INFO: POD                                   NODE                         PHASE    GRACE  CONDITIONS
Sep  9 04:06:47.865: INFO: webserver-deployment-dd94f59b7-48q9w  ostest-5xqm8-worker-0-cbbx9  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:07 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:07 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  }]
Sep  9 04:06:47.865: INFO: webserver-deployment-dd94f59b7-8pm7c  ostest-5xqm8-worker-0-rzx47  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:23 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:23 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-hvwv8  ostest-5xqm8-worker-0-cbbx9  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:10 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:10 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-m4bk9  ostest-5xqm8-worker-0-cbbx9  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-p26k2  ostest-5xqm8-worker-0-rzx47  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:23 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:23 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-p88mh  ostest-5xqm8-worker-0-twrlr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:22 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:22 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-pfc5z  ostest-5xqm8-worker-0-cbbx9  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:15 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:15 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-prjxj  ostest-5xqm8-worker-0-twrlr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:22 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:22 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-sqscr  ostest-5xqm8-worker-0-rzx47  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:23 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:23 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:46 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: webserver-deployment-dd94f59b7-t6zlg  ostest-5xqm8-worker-0-twrlr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:22 -0400 EDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:02:22 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:01:45 -0400 EDT  }]
Sep  9 04:06:47.866: INFO: 
Sep  9 04:06:47.890: INFO: webserver-deployment-dd94f59b7-48q9w[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.146. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.146. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:07.048360 2020] [mpm_event:notice] [pid 1:tid 140645896559464] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:07.048443 2020] [core:notice] [pid 1:tid 140645896559464] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:47.996: INFO: webserver-deployment-dd94f59b7-8pm7c[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.87. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.87. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:23.404919 2020] [mpm_event:notice] [pid 1:tid 139954284338024] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:23.405000 2020] [core:notice] [pid 1:tid 139954284338024] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.057: INFO: webserver-deployment-dd94f59b7-hvwv8[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.211. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.211. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:10.453196 2020] [mpm_event:notice] [pid 1:tid 140309517011816] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:10.453279 2020] [core:notice] [pid 1:tid 140309517011816] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.095: INFO: webserver-deployment-dd94f59b7-p26k2[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.85. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.85. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:23.430112 2020] [mpm_event:notice] [pid 1:tid 140116919294824] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:23.430195 2020] [core:notice] [pid 1:tid 140116919294824] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.184: INFO: webserver-deployment-dd94f59b7-p88mh[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.209. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.209. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:21.942585 2020] [mpm_event:notice] [pid 1:tid 139657366653800] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:21.942691 2020] [core:notice] [pid 1:tid 139657366653800] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.210: INFO: webserver-deployment-dd94f59b7-pfc5z[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.126. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.126. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:14.905521 2020] [mpm_event:notice] [pid 1:tid 139634152815464] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:14.905643 2020] [core:notice] [pid 1:tid 139634152815464] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.241: INFO: webserver-deployment-dd94f59b7-prjxj[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.14. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.14. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:22.059744 2020] [mpm_event:notice] [pid 1:tid 140368501541736] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:22.059830 2020] [core:notice] [pid 1:tid 140368501541736] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.262: INFO: webserver-deployment-dd94f59b7-sqscr[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.154. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.174.154. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:23.458572 2020] [mpm_event:notice] [pid 1:tid 140646349810536] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:23.458674 2020] [core:notice] [pid 1:tid 140646349810536] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.275: INFO: webserver-deployment-dd94f59b7-t6zlg[e2e-deployment-4302].container[httpd].log
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.207. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.175.207. Set the 'ServerName' directive globally to suppress this message
[Wed Sep 09 08:02:22.096135 2020] [mpm_event:notice] [pid 1:tid 139659164089192] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Sep 09 08:02:22.096243 2020] [core:notice] [pid 1:tid 139659164089192] AH00094: Command line: 'httpd -D FOREGROUND'

Sep  9 04:06:48.275: INFO: unable to fetch logs for pods: webserver-deployment-dd94f59b7-m4bk9[e2e-deployment-4302].container[httpd].error=the server rejected our request for an unknown reason (get pods webserver-deployment-dd94f59b7-m4bk9)
Sep  9 04:06:48.293: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:06:48.293: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-deployment-4302" for this suite.
Sep  9 04:06:48.346: INFO: Running AfterSuite actions on all nodes
Sep  9 04:06:48.346: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/apps/deployment.go:729]: error in waiting for pods to come up: failed to wait for pods running: [timed out waiting for the condition]
Unexpected error:
    <*errors.errorString | 0xc002579070>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
occurred

Stderr
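
Follow-up (illustrative, not output from this run): the test gave up waiting for the deployment's pods to come up (deployment.go:729), and the e2e-deployment-4302 namespace was destroyed at the end of the case, so the stuck replicas can no longer be inspected. On a re-run, the usual way to see why webserver-deployment never reached the expected number of Running pods is to look at the pods and namespace events directly, for example:

    oc get pods -n e2e-deployment-4302 -o wide
    oc describe deployment/webserver-deployment -n e2e-deployment-4302
    oc get events -n e2e-deployment-4302 --sort-by=.lastTimestamp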
[sig-storage]_Projected_combined_should_project_all_components_that_make_up_the_projection_API_[Projection][NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 312.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc00084dad0>: {
        s: "expected pod \"projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4\" success: Gave up after waiting 5m0s for pod \"projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4\" to be \"Succeeded or Failed\"",
    }
    expected pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4" success: Gave up after waiting 5m0s for pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4" to be "Succeeded or Failed"
occurred

Stdout
I0909 04:00:38.334833  727039 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:00:38.408: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:00:38.493: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:00:38.660: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:00:38.660: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:00:38.660: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:00:38.698: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:00:38.710: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:00:38.745: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-storage] Projected combined
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename projected
Sep  9 04:00:39.108: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:00:40.086: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-3fda8013-0b57-4939-83b9-ac2b79b903b9
STEP: Creating secret with name secret-projected-all-test-volume-6d2d57c3-3486-46ed-9322-572118fb3375
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep  9 04:00:40.350: INFO: Waiting up to 5m0s for pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4" in namespace "e2e-projected-3633" to be "Succeeded or Failed"
Sep  9 04:00:40.375: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.209606ms
Sep  9 04:00:42.388: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037413753s
Sep  9 04:00:44.405: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054248547s
Sep  9 04:00:46.431: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080352505s
Sep  9 04:00:48.437: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086447723s
Sep  9 04:00:50.467: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116252451s
Sep  9 04:00:52.571: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.220924472s
Sep  9 04:00:54.598: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.247581981s
Sep  9 04:00:56.616: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.265603931s
Sep  9 04:00:58.691: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.340410152s
Sep  9 04:01:00.735: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.38430394s
Sep  9 04:01:02.748: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.397306457s
Sep  9 04:01:04.798: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.447683631s
Sep  9 04:01:06.829: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.478164164s
Sep  9 04:01:08.841: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.490999564s
Sep  9 04:01:10.866: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.515445227s
Sep  9 04:01:12.898: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.547819887s
Sep  9 04:01:14.907: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.556457438s
Sep  9 04:01:16.916: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.565187109s
Sep  9 04:01:18.929: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.57868595s
Sep  9 04:01:21.008: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.657532508s
Sep  9 04:01:23.023: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.672738497s
Sep  9 04:01:25.029: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.678913327s
Sep  9 04:01:27.037: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 46.686372243s
Sep  9 04:01:29.044: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 48.693425686s
Sep  9 04:01:31.054: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 50.703816993s
Sep  9 04:01:33.074: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 52.72359106s
Sep  9 04:01:35.089: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 54.738713012s
Sep  9 04:01:37.097: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 56.746582169s
Sep  9 04:01:39.103: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 58.752105843s
Sep  9 04:01:41.121: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.770276904s
Sep  9 04:01:43.135: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.784294717s
Sep  9 04:01:45.287: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.936958656s
Sep  9 04:01:47.298: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.947075768s
Sep  9 04:01:49.307: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.956386575s
Sep  9 04:01:51.317: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.966883719s
Sep  9 04:01:53.325: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.974323518s
Sep  9 04:01:55.355: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.004565981s
Sep  9 04:01:57.404: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.053937909s
Sep  9 04:01:59.422: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.072006041s
Sep  9 04:02:01.454: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.103378404s
Sep  9 04:02:03.473: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.122195792s
Sep  9 04:02:05.495: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.144553044s
Sep  9 04:02:07.514: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m27.163176259s
Sep  9 04:02:09.542: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m29.191297259s
Sep  9 04:02:11.567: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m31.216081718s
Sep  9 04:02:13.579: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m33.228869224s
Sep  9 04:02:15.618: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.267764498s
Sep  9 04:02:17.654: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.303962286s
Sep  9 04:02:19.662: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.311832343s
Sep  9 04:02:21.700: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m41.349656303s
Sep  9 04:02:23.722: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.372005352s
Sep  9 04:02:25.734: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.383074288s
Sep  9 04:02:27.770: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m47.419462164s
Sep  9 04:02:29.805: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m49.454816873s
Sep  9 04:02:31.887: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m51.536989601s
Sep  9 04:02:33.956: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m53.605543622s
Sep  9 04:02:35.967: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m55.616887714s
Sep  9 04:02:37.981: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m57.630271045s
Sep  9 04:02:40.005: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m59.654361002s
Sep  9 04:02:42.026: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m1.675807916s
Sep  9 04:02:44.049: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m3.698965823s
Sep  9 04:02:46.059: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m5.708956943s
Sep  9 04:02:48.065: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m7.71439665s
Sep  9 04:02:50.097: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m9.746276042s
Sep  9 04:02:52.122: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m11.771774412s
Sep  9 04:02:54.203: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.852610829s
Sep  9 04:02:56.218: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m15.867727813s
Sep  9 04:02:58.233: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m17.88234826s
Sep  9 04:03:00.273: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m19.922135155s
Sep  9 04:03:02.283: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m21.93302098s
Sep  9 04:03:04.319: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m23.968558358s
Sep  9 04:03:06.332: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m25.981510824s
Sep  9 04:03:08.342: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m27.991608933s
Sep  9 04:03:10.362: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.011403432s
Sep  9 04:03:12.372: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.02197594s
Sep  9 04:03:14.383: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.032207738s
Sep  9 04:03:16.393: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.042849357s
Sep  9 04:03:18.413: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.062858324s
Sep  9 04:03:20.457: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.106885002s
Sep  9 04:03:22.482: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.131257867s
Sep  9 04:03:24.493: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.142478223s
Sep  9 04:03:26.517: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.166574376s
Sep  9 04:03:28.546: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.19569857s
Sep  9 04:03:30.569: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.218242887s
Sep  9 04:03:32.580: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.229525171s
Sep  9 04:03:34.682: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.331971574s
Sep  9 04:03:36.705: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.354121073s
Sep  9 04:03:38.712: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.361562027s
Sep  9 04:03:40.740: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.389220756s
Sep  9 04:03:42.754: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.403309348s
Sep  9 04:03:44.845: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.494694883s
Sep  9 04:03:46.877: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.52633351s
Sep  9 04:03:48.891: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.540087145s
Sep  9 04:03:50.915: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.564116554s
Sep  9 04:03:52.922: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.571721905s
Sep  9 04:03:54.968: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.617956169s
Sep  9 04:03:56.993: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.642826096s
Sep  9 04:03:59.009: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.658543053s
Sep  9 04:04:01.029: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.678450399s
Sep  9 04:04:03.041: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.690146846s
Sep  9 04:04:05.092: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.741718054s
Sep  9 04:04:07.098: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.747451046s
Sep  9 04:04:09.113: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.762358648s
Sep  9 04:04:11.131: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.780302943s
Sep  9 04:04:13.140: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.789786911s
Sep  9 04:04:15.184: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.833827106s
Sep  9 04:04:17.324: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.973194354s
Sep  9 04:04:19.346: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.995514035s
Sep  9 04:04:21.379: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m41.028198195s
Sep  9 04:04:23.401: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m43.05028923s
Sep  9 04:04:25.418: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m45.067759493s
Sep  9 04:04:27.427: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m47.076732959s
Sep  9 04:04:29.566: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m49.215676137s
Sep  9 04:04:31.634: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m51.283734295s
Sep  9 04:04:33.664: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m53.31350098s
Sep  9 04:04:35.694: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m55.343088038s
Sep  9 04:04:37.833: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m57.482227955s
Sep  9 04:04:39.846: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3m59.495559234s
Sep  9 04:04:41.856: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m1.505710057s
Sep  9 04:04:43.961: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m3.610837426s
Sep  9 04:04:45.978: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m5.627761612s
Sep  9 04:04:47.987: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m7.636914316s
Sep  9 04:04:50.012: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m9.661404554s
Sep  9 04:04:52.032: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m11.681606035s
Sep  9 04:04:54.109: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m13.758950617s
Sep  9 04:04:56.128: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m15.777979761s
Sep  9 04:04:58.139: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m17.78881496s
Sep  9 04:05:00.156: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m19.806022336s
Sep  9 04:05:02.168: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m21.817588574s
Sep  9 04:05:04.182: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m23.831936649s
Sep  9 04:05:06.334: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m25.983918989s
Sep  9 04:05:08.345: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m27.994451456s
Sep  9 04:05:10.354: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.003590049s
Sep  9 04:05:12.361: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.01045562s
Sep  9 04:05:14.373: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.022716137s
Sep  9 04:05:16.387: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.036735469s
Sep  9 04:05:18.436: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.085562221s
Sep  9 04:05:20.442: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.091478121s
Sep  9 04:05:22.471: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.120355969s
Sep  9 04:05:24.510: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.159497288s
Sep  9 04:05:26.524: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.173911335s
Sep  9 04:05:28.537: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.186813245s
Sep  9 04:05:30.654: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.304024539s
Sep  9 04:05:32.672: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.321322303s
Sep  9 04:05:34.715: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.36485137s
Sep  9 04:05:36.742: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.391090734s
Sep  9 04:05:38.820: INFO: Pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.469347935s
Sep  9 04:05:41.009: INFO: Failed to get logs from node "ostest-5xqm8-worker-0-rzx47" pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4" container "projected-all-volume-test": the server rejected our request for an unknown reason (get pods projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4)
STEP: delete the pod
Sep  9 04:05:41.076: INFO: Waiting for pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 to disappear
Sep  9 04:05:41.109: INFO: Pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 still exists
Sep  9 04:05:43.109: INFO: Waiting for pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 to disappear
Sep  9 04:05:43.191: INFO: Pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 still exists
Sep  9 04:05:45.109: INFO: Waiting for pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 to disappear
Sep  9 04:05:45.130: INFO: Pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 still exists
Sep  9 04:05:47.109: INFO: Waiting for pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 to disappear
Sep  9 04:05:47.143: INFO: Pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 still exists
Sep  9 04:05:49.109: INFO: Waiting for pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 to disappear
Sep  9 04:05:49.222: INFO: Pod projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 no longer exists
[AfterEach] [sig-storage] Projected combined
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-projected-3633".
STEP: Found 6 events.
Sep  9 04:05:49.324: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4: { } Scheduled: Successfully assigned e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 to ostest-5xqm8-worker-0-rzx47
Sep  9 04:05:49.324: INFO: At 2020-09-09 04:04:06 -0400 EDT - event for projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(ab957495252baace41d284f6c92af55427dae9629fb2ac243dbb9cd8346c8800): netplugin failed: "2020/09/09 08:00:40 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-projected-3633;K8S_POD_NAME=projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4;K8S_POD_INFRA_CONTAINER_ID=ab957495252baace41d284f6c92af55427dae9629fb2ac243dbb9cd8346c8800, CNI_NETNS=/var/run/netns/f4d5efa9-0c39-4f4e-923b-3e0369c53e80).\n"
Sep  9 04:05:49.324: INFO: At 2020-09-09 04:04:32 -0400 EDT - event for projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(2739a2149adbe96455dddc0796c2fcac9db48c9d40be55e323372262352e90f0): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:49.324: INFO: At 2020-09-09 04:04:54 -0400 EDT - event for projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(01725b7aab02ba0b27775da2703b9e804934cdd8ee2ff6337bf57548cb0af877): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:49.324: INFO: At 2020-09-09 04:05:16 -0400 EDT - event for projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(c281da968153fc520570f7116626f0135ebcf8dab947983fde8e7968b4469334): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:49.324: INFO: At 2020-09-09 04:05:36 -0400 EDT - event for projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(819c0db635d2db87c174e76bc727a44c3ed20002c6a35d4b21d3a5134198f8a9): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep  9 04:05:49.365: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:05:49.365: INFO: 
Sep  9 04:05:49.507: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:05:49.507: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-projected-3633" for this suite.
Sep  9 04:05:49.727: INFO: Running AfterSuite actions on all nodes
Sep  9 04:05:49.727: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc00084dad0>: {
        s: "expected pod \"projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4\" success: Gave up after waiting 5m0s for pod \"projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4\" to be \"Succeeded or Failed\"",
    }
    expected pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4" success: Gave up after waiting 5m0s for pod "projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4" to be "Succeeded or Failed"
occurred

Stderr
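
Follow-up (illustrative, not output from this run): the FailedCreatePodSandBox events above show the pod never got past sandbox creation because the kuryr CNI daemon on ostest-5xqm8-worker-0-rzx47 answered /addNetwork with HTTP 500 and finally became unreachable on localhost:5036. On a Kuryr-based cluster a first check is usually the kuryr-cni pod on that node and the kuryr-controller logs (assuming the default openshift-kuryr namespace and object names), for example:

    oc -n openshift-kuryr get pods -o wide | grep ostest-5xqm8-worker-0-rzx47
    oc -n openshift-kuryr logs ds/kuryr-cni --tail=100
    oc -n openshift-kuryr logs deployment/kuryr-controller --tail=100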
[k8s.io]_InitContainer_[NodeConformance]_should_not_start_app_containers_and_fail_the_pod_if_init_containers_fail_on_a_RestartNever_pod_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 115.0s

[sig-apps]_StatefulSet_[k8s.io]_Basic_StatefulSet_functionality_[StatefulSetBasic]_Scaling_should_happen_in_predictable_order_and_halt_if_any_stateful_pod_is_unhealthy_[Slow]_[Conformance]_[Suite:k8s]
e2e_tests
Time Taken: 487.0s

[sig-storage]_Projected_downwardAPI_should_provide_podname_only_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 315.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc001acef90>: {
        s: "expected pod \"downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6\" success: Gave up after waiting 5m0s for pod \"downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6\" to be \"Succeeded or Failed\"",
    }
    expected pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6" success: Gave up after waiting 5m0s for pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6" to be "Succeeded or Failed"
occurred

Stdout
I0909 04:00:23.620172  725799 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:00:23.685: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:00:23.726: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:00:23.815: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:00:23.815: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:00:23.815: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:00:24.064: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:00:24.068: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:00:24.086: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-storage] Projected downwardAPI
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename projected
Sep  9 04:00:24.483: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:00:24.820: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  @/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep  9 04:00:24.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6" in namespace "e2e-projected-1934" to be "Succeeded or Failed"
Sep  9 04:00:24.902: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.851335ms
Sep  9 04:00:26.919: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031383292s
Sep  9 04:00:28.927: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03864569s
Sep  9 04:00:30.944: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05599445s
Sep  9 04:00:32.960: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071649088s
Sep  9 04:00:34.975: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.086491054s
Sep  9 04:00:36.991: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.103104207s
Sep  9 04:00:39.011: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.123101918s
Sep  9 04:00:41.047: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.159007735s
Sep  9 04:00:43.063: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.175275347s
Sep  9 04:00:45.079: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.19122816s
Sep  9 04:00:47.087: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.19927135s
Sep  9 04:00:49.095: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.20663273s
Sep  9 04:00:51.105: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.217162086s
Sep  9 04:00:53.129: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.241038941s
Sep  9 04:00:55.155: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.266562713s
Sep  9 04:00:57.169: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.281129944s
Sep  9 04:00:59.191: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.302506501s
Sep  9 04:01:01.204: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.316304473s
Sep  9 04:01:03.230: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.34146439s
Sep  9 04:01:05.243: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 40.355065121s
Sep  9 04:01:07.255: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.366945726s
Sep  9 04:01:09.269: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 44.381017073s
Sep  9 04:01:11.280: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.392264614s
Sep  9 04:01:13.292: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.40424818s
Sep  9 04:01:15.301: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.41248798s
Sep  9 04:01:17.311: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 52.422618778s
Sep  9 04:01:19.325: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 54.437156125s
Sep  9 04:01:21.343: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 56.454642177s
Sep  9 04:01:23.353: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 58.464794276s
Sep  9 04:01:25.359: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.471448023s
Sep  9 04:01:27.373: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.484775071s
Sep  9 04:01:29.380: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.492222229s
Sep  9 04:01:31.395: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.50666837s
Sep  9 04:01:33.425: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.537400166s
Sep  9 04:01:35.439: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.551013215s
Sep  9 04:01:37.446: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.558303884s
Sep  9 04:01:39.456: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.56820593s
Sep  9 04:01:41.479: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.59063454s
Sep  9 04:01:43.488: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.600130937s
Sep  9 04:01:45.499: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.610972904s
Sep  9 04:01:47.517: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.628578241s
Sep  9 04:01:49.540: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.652259682s
Sep  9 04:01:51.561: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.673428359s
Sep  9 04:01:53.570: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.682098035s
Sep  9 04:01:55.584: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.695781202s
Sep  9 04:01:57.596: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.70794614s
Sep  9 04:01:59.618: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.730132169s
Sep  9 04:02:01.629: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.741312483s
Sep  9 04:02:03.640: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.751509821s
Sep  9 04:02:05.661: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.772886459s
Sep  9 04:02:07.677: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.78935832s
Sep  9 04:02:09.696: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.808100232s
Sep  9 04:02:11.712: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.823890608s
Sep  9 04:02:13.784: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.895663664s
Sep  9 04:02:15.791: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.902610222s
Sep  9 04:02:17.798: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.909673583s
Sep  9 04:02:19.815: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.927238229s
Sep  9 04:02:21.825: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.936513656s
Sep  9 04:02:23.966: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m59.078288178s
Sep  9 04:02:25.985: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m1.097428561s
Sep  9 04:02:28.003: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m3.11455184s
Sep  9 04:02:30.028: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m5.139484792s
Sep  9 04:02:32.055: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m7.167401131s
Sep  9 04:02:34.070: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m9.18155485s
Sep  9 04:02:36.085: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m11.197164223s
Sep  9 04:02:38.097: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.209367285s
Sep  9 04:02:40.122: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m15.233916241s
Sep  9 04:02:42.138: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m17.249706807s
Sep  9 04:02:44.155: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m19.266956766s
Sep  9 04:02:46.185: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m21.297156908s
Sep  9 04:02:48.207: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m23.318514685s
Sep  9 04:02:50.225: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m25.337440998s
Sep  9 04:02:52.241: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m27.352542456s
Sep  9 04:02:54.255: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m29.366472996s
Sep  9 04:02:56.288: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m31.400219189s
Sep  9 04:02:58.312: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m33.424326583s
Sep  9 04:03:00.329: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m35.440850397s
Sep  9 04:03:02.337: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m37.44873541s
Sep  9 04:03:04.373: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m39.48518267s
Sep  9 04:03:06.393: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m41.505233833s
Sep  9 04:03:08.427: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m43.538656342s
Sep  9 04:03:10.437: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m45.548872436s
Sep  9 04:03:12.457: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m47.569108475s
Sep  9 04:03:14.491: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m49.60314558s
Sep  9 04:03:16.525: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m51.636744224s
Sep  9 04:03:18.561: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m53.672640608s
Sep  9 04:03:20.583: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m55.694949125s
Sep  9 04:03:22.594: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m57.705510405s
Sep  9 04:03:24.606: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2m59.717686003s
Sep  9 04:03:26.614: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m1.726340391s
Sep  9 04:03:28.632: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m3.744256172s
Sep  9 04:03:30.646: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m5.758026859s
Sep  9 04:03:32.671: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m7.782597882s
Sep  9 04:03:34.688: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m9.800121791s
Sep  9 04:03:36.706: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m11.818324707s
Sep  9 04:03:38.731: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m13.842722894s
Sep  9 04:03:40.745: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m15.85741346s
Sep  9 04:03:42.774: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m17.885646617s
Sep  9 04:03:44.784: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m19.896207394s
Sep  9 04:03:46.805: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m21.917082961s
Sep  9 04:03:48.817: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m23.928657119s
Sep  9 04:03:50.847: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m25.958632062s
Sep  9 04:03:52.862: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m27.974362695s
Sep  9 04:03:54.875: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m29.986467008s
Sep  9 04:03:56.883: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m31.994674341s
Sep  9 04:03:58.897: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.008963143s
Sep  9 04:04:00.914: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.026279997s
Sep  9 04:04:02.927: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.038620159s
Sep  9 04:04:04.943: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.055454769s
Sep  9 04:04:06.955: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.066985509s
Sep  9 04:04:08.967: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.078596312s
Sep  9 04:04:10.994: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.105782013s
Sep  9 04:04:13.006: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.117703709s
Sep  9 04:04:15.016: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.128340558s
Sep  9 04:04:17.095: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.207111882s
Sep  9 04:04:19.124: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.236338067s
Sep  9 04:04:21.156: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.267489458s
Sep  9 04:04:23.176: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.287583112s
Sep  9 04:04:25.203: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.314884084s
Sep  9 04:04:27.218: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.330062356s
Sep  9 04:04:29.235: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.3474377s
Sep  9 04:04:31.264: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.375814715s
Sep  9 04:04:33.287: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.399406573s
Sep  9 04:04:35.323: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.434674683s
Sep  9 04:04:37.333: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.444621606s
Sep  9 04:04:39.367: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.478840442s
Sep  9 04:04:41.376: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.487837054s
Sep  9 04:04:43.384: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.495558754s
Sep  9 04:04:45.398: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.509493989s
Sep  9 04:04:47.407: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.518649363s
Sep  9 04:04:49.425: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.536702927s
Sep  9 04:04:51.440: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.551979443s
Sep  9 04:04:53.456: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.567614555s
Sep  9 04:04:55.488: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.600295709s
Sep  9 04:04:57.503: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.614816341s
Sep  9 04:04:59.523: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.634480918s
Sep  9 04:05:01.541: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.65342262s
Sep  9 04:05:03.550: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.661739895s
Sep  9 04:05:05.563: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.674557085s
Sep  9 04:05:07.590: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.702339776s
Sep  9 04:05:09.602: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.714151495s
Sep  9 04:05:11.634: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.746086763s
Sep  9 04:05:13.645: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.757214952s
Sep  9 04:05:15.661: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.772854257s
Sep  9 04:05:17.684: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.795690179s
Sep  9 04:05:19.702: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.81350632s
Sep  9 04:05:21.732: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.84392912s
Sep  9 04:05:23.746: INFO: Pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.85753281s
Sep  9 04:05:25.815: INFO: Failed to get logs from node "ostest-5xqm8-worker-0-rzx47" pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6" container "client-container": the server rejected our request for an unknown reason (get pods downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6)
STEP: delete the pod
Sep  9 04:05:25.841: INFO: Waiting for pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to disappear
Sep  9 04:05:25.854: INFO: Pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 still exists
Sep  9 04:05:27.854: INFO: Waiting for pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to disappear
Sep  9 04:05:27.864: INFO: Pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 still exists
Sep  9 04:05:29.854: INFO: Waiting for pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to disappear
Sep  9 04:05:29.885: INFO: Pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 still exists
Sep  9 04:05:31.854: INFO: Waiting for pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to disappear
Sep  9 04:05:31.874: INFO: Pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 still exists
Sep  9 04:05:33.854: INFO: Waiting for pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to disappear
Sep  9 04:05:33.868: INFO: Pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 still exists
Sep  9 04:05:35.854: INFO: Waiting for pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to disappear
Sep  9 04:05:35.869: INFO: Pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 still exists
Sep  9 04:05:37.854: INFO: Waiting for pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to disappear
Sep  9 04:05:37.863: INFO: Pod downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-projected-1934".
STEP: Found 7 events.
Sep  9 04:05:37.872: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6: { } Scheduled: Successfully assigned e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 to ostest-5xqm8-worker-0-rzx47
Sep  9 04:05:37.872: INFO: At 2020-09-09 04:03:35 -0400 EDT - event for downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(90307e4d3a50a03a37a09f0b74dc050aca31e595ccbab5a819bbde3dca979578): netplugin failed: "2020/09/09 08:00:25 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-projected-1934;K8S_POD_NAME=downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6;K8S_POD_INFRA_CONTAINER_ID=90307e4d3a50a03a37a09f0b74dc050aca31e595ccbab5a819bbde3dca979578, CNI_NETNS=/var/run/netns/e514f725-73a2-4439-9717-011ac6f6dad4).\n"
Sep  9 04:05:37.872: INFO: At 2020-09-09 04:03:56 -0400 EDT - event for downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(4d8651bd5047504e7b65de3b48b80ec81c206553855231f89713737ff8158203): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:37.872: INFO: At 2020-09-09 04:04:19 -0400 EDT - event for downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(fd496ce5bd6da4e9e4cccd7604ae5ea4f9b9387c180399e08fe42b87a2c733ad): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:37.872: INFO: At 2020-09-09 04:04:43 -0400 EDT - event for downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(9c99e60384cde3cadabe92f877c1187d03f74b9e1dec2043ef15deaccb0b5179): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:37.872: INFO: At 2020-09-09 04:05:06 -0400 EDT - event for downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(2124f3f4300f1b54e1efefa85751d42b8d2417b24c0583c5894b6b7cd85a052d): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:37.872: INFO: At 2020-09-09 04:05:36 -0400 EDT - event for downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(5ede76f4afeb53542a7749d79b0608c163d4f4d5d95a55220b9015b48aa9248f): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

Sep  9 04:05:37.884: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:05:37.884: INFO: 
Sep  9 04:05:37.902: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:05:37.902: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-projected-1934" for this suite.
Sep  9 04:05:37.967: INFO: Running AfterSuite actions on all nodes
Sep  9 04:05:37.967: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc001acef90>: {
        s: "expected pod \"downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6\" success: Gave up after waiting 5m0s for pod \"downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6\" to be \"Succeeded or Failed\"",
    }
    expected pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6" success: Gave up after waiting 5m0s for pod "downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6" to be "Succeeded or Failed"
occurred

Stderr
[sig-storage]_Downward_API_volume_should_provide_node_allocatable_(cpu)_as_default_cpu_limit_if_the_limit_is_not_set_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 311.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc001c23970>: {
        s: "expected pod \"downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7\" success: Gave up after waiting 5m0s for pod \"downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7\" to be \"Succeeded or Failed\"",
    }
    expected pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7" success: Gave up after waiting 5m0s for pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7" to be "Succeeded or Failed"
occurred

Stdout
I0909 04:00:18.517203  725323 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:00:18.578: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:00:18.617: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:00:18.691: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:00:18.691: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:00:18.691: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:00:18.711: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:00:18.714: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:00:18.734: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-storage] Downward API volume
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename downward-api
Sep  9 04:00:19.326: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:00:20.026: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  @/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep  9 04:00:20.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7" in namespace "e2e-downward-api-379" to be "Succeeded or Failed"
Sep  9 04:00:20.148: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.608492ms
Sep  9 04:00:22.246: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10891584s
Sep  9 04:00:24.296: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158142511s
Sep  9 04:00:26.321: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183248886s
Sep  9 04:00:28.331: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19398634s
Sep  9 04:00:30.416: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.278186339s
Sep  9 04:00:32.474: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.336784508s
Sep  9 04:00:34.492: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.354111254s
Sep  9 04:00:36.515: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.377651897s
Sep  9 04:00:38.547: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.409796518s
Sep  9 04:00:40.584: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.446889982s
Sep  9 04:00:42.596: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.458582132s
Sep  9 04:00:44.616: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.478071325s
Sep  9 04:00:46.635: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.497405722s
Sep  9 04:00:48.648: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.510829356s
Sep  9 04:00:50.759: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.621663259s
Sep  9 04:00:52.782: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.644724788s
Sep  9 04:00:54.808: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.670894651s
Sep  9 04:00:56.819: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.681162008s
Sep  9 04:00:58.864: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.726196679s
Sep  9 04:01:00.906: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.768464338s
Sep  9 04:01:02.912: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.774671574s
Sep  9 04:01:04.928: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.790954035s
Sep  9 04:01:06.978: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.841013851s
Sep  9 04:01:08.988: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.850557114s
Sep  9 04:01:11.006: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 50.86883732s
Sep  9 04:01:13.015: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.877754953s
Sep  9 04:01:15.028: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 54.890622403s
Sep  9 04:01:17.036: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 56.898933292s
Sep  9 04:01:19.058: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.920574549s
Sep  9 04:01:21.153: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.015414701s
Sep  9 04:01:23.170: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.032749724s
Sep  9 04:01:25.179: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.041151363s
Sep  9 04:01:27.187: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.049967953s
Sep  9 04:01:29.210: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.072320434s
Sep  9 04:01:31.219: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.081914759s
Sep  9 04:01:33.240: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.102189995s
Sep  9 04:01:35.249: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.111055822s
Sep  9 04:01:37.256: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.118406692s
Sep  9 04:01:39.264: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.126309067s
Sep  9 04:01:41.278: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.140828617s
Sep  9 04:01:43.341: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.203924533s
Sep  9 04:01:45.364: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.226985807s
Sep  9 04:01:47.381: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m27.243863792s
Sep  9 04:01:49.395: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m29.257465839s
Sep  9 04:01:51.405: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m31.267960663s
Sep  9 04:01:53.417: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m33.279879468s
Sep  9 04:01:55.545: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.407198293s
Sep  9 04:01:57.582: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.444883448s
Sep  9 04:01:59.621: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.483371239s
Sep  9 04:02:01.633: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m41.495433172s
Sep  9 04:02:03.638: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.500979281s
Sep  9 04:02:05.651: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.513393039s
Sep  9 04:02:07.690: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m47.552954352s
Sep  9 04:02:09.712: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m49.574144626s
Sep  9 04:02:11.723: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m51.585878738s
Sep  9 04:02:13.750: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m53.612937381s
Sep  9 04:02:15.769: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m55.63145608s
Sep  9 04:02:17.799: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m57.661568266s
Sep  9 04:02:19.819: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m59.681715365s
Sep  9 04:02:21.832: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m1.694620447s
Sep  9 04:02:23.889: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m3.751933967s
Sep  9 04:02:25.902: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m5.764249774s
Sep  9 04:02:27.910: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m7.77209526s
Sep  9 04:02:29.922: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m9.785004596s
Sep  9 04:02:31.950: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m11.812912466s
Sep  9 04:02:33.980: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.843041049s
Sep  9 04:02:36.000: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m15.862273537s
Sep  9 04:02:38.017: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m17.879718372s
Sep  9 04:02:40.054: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m19.916943585s
Sep  9 04:02:42.066: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m21.928613048s
Sep  9 04:02:44.104: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m23.966056459s
Sep  9 04:02:46.128: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m25.990607013s
Sep  9 04:02:48.149: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.011665293s
Sep  9 04:02:50.199: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.061454457s
Sep  9 04:02:52.219: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.08184198s
Sep  9 04:02:54.243: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.106005332s
Sep  9 04:02:56.258: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.120389227s
Sep  9 04:02:58.271: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.133345796s
Sep  9 04:03:00.311: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.173189763s
Sep  9 04:03:02.320: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.182577448s
Sep  9 04:03:04.336: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.198079048s
Sep  9 04:03:06.341: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.203906063s
Sep  9 04:03:08.355: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.217681686s
Sep  9 04:03:10.365: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.227758837s
Sep  9 04:03:12.373: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.235210996s
Sep  9 04:03:14.385: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.247384384s
Sep  9 04:03:16.393: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.256040857s
Sep  9 04:03:18.424: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.286709996s
Sep  9 04:03:20.440: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.302063185s
Sep  9 04:03:22.461: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.323260401s
Sep  9 04:03:24.478: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.341023634s
Sep  9 04:03:26.499: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.361505137s
Sep  9 04:03:28.543: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.406001696s
Sep  9 04:03:30.574: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.437016163s
Sep  9 04:03:32.583: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.44547185s
Sep  9 04:03:34.741: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.603988881s
Sep  9 04:03:36.762: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.62500991s
Sep  9 04:03:38.772: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.634494461s
Sep  9 04:03:40.810: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.672873861s
Sep  9 04:03:42.824: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.686220364s
Sep  9 04:03:44.877: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.739941515s
Sep  9 04:03:46.908: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.770881884s
Sep  9 04:03:48.918: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.780825973s
Sep  9 04:03:50.935: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.797746905s
Sep  9 04:03:52.948: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.810830459s
Sep  9 04:03:54.976: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.838717162s
Sep  9 04:03:56.986: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.848861609s
Sep  9 04:03:58.997: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.859451995s
Sep  9 04:04:01.032: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.895032652s
Sep  9 04:04:03.041: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.903950808s
Sep  9 04:04:05.089: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.952014382s
Sep  9 04:04:07.100: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.962184014s
Sep  9 04:04:09.123: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.985457705s
Sep  9 04:04:11.150: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m51.012456349s
Sep  9 04:04:13.159: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m53.021091406s
Sep  9 04:04:15.229: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m55.091896162s
Sep  9 04:04:17.313: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m57.175195626s
Sep  9 04:04:19.339: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3m59.201769522s
Sep  9 04:04:21.368: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m1.231043546s
Sep  9 04:04:23.384: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m3.246176953s
Sep  9 04:04:25.402: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m5.264742383s
Sep  9 04:04:27.430: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m7.292267108s
Sep  9 04:04:29.566: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m9.428685268s
Sep  9 04:04:31.635: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m11.497252212s
Sep  9 04:04:33.654: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m13.516400491s
Sep  9 04:04:35.662: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m15.524923768s
Sep  9 04:04:37.866: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m17.728245904s
Sep  9 04:04:39.927: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m19.78999617s
Sep  9 04:04:41.944: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m21.806734241s
Sep  9 04:04:43.985: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m23.848049981s
Sep  9 04:04:46.024: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m25.88700549s
Sep  9 04:04:48.032: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m27.894362256s
Sep  9 04:04:50.075: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m29.937250569s
Sep  9 04:04:52.085: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m31.947850588s
Sep  9 04:04:54.127: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m33.989932705s
Sep  9 04:04:56.142: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.004858333s
Sep  9 04:04:58.148: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.01049711s
Sep  9 04:05:00.163: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.025206061s
Sep  9 04:05:02.172: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.034942932s
Sep  9 04:05:04.200: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.06206392s
Sep  9 04:05:06.334: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.196661923s
Sep  9 04:05:08.350: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.212879121s
Sep  9 04:05:10.371: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.233976097s
Sep  9 04:05:12.378: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.240304689s
Sep  9 04:05:14.386: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.248513159s
Sep  9 04:05:16.392: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.254684644s
Sep  9 04:05:18.432: INFO: Pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.294593575s
Sep  9 04:05:20.540: INFO: Failed to get logs from node "ostest-5xqm8-worker-0-rzx47" pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7" container "client-container": the server rejected our request for an unknown reason (get pods downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7)
STEP: delete the pod
Sep  9 04:05:20.587: INFO: Waiting for pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 to disappear
Sep  9 04:05:20.600: INFO: Pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 still exists
Sep  9 04:05:22.600: INFO: Waiting for pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 to disappear
Sep  9 04:05:22.609: INFO: Pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 still exists
Sep  9 04:05:24.600: INFO: Waiting for pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 to disappear
Sep  9 04:05:24.630: INFO: Pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 still exists
Sep  9 04:05:26.600: INFO: Waiting for pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 to disappear
Sep  9 04:05:26.635: INFO: Pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 still exists
Sep  9 04:05:28.600: INFO: Waiting for pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 to disappear
Sep  9 04:05:28.609: INFO: Pod downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-downward-api-379".
STEP: Found 9 events.
Sep  9 04:05:28.630: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: { } Scheduled: Successfully assigned e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 to ostest-5xqm8-worker-0-rzx47
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:02:25 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(2fa31363bc30ea41f1ffff96f6539e716b9153e72aeec69875efb6d10584d701): netplugin failed: "2020/09/09 08:00:20 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-downward-api-379;K8S_POD_NAME=downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7;K8S_POD_INFRA_CONTAINER_ID=2fa31363bc30ea41f1ffff96f6539e716b9153e72aeec69875efb6d10584d701, CNI_NETNS=/var/run/netns/de8ef08f-5801-49d9-909b-88cb1e5c9f07).\n"
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:02:47 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(516e94edb4e0ab465c32730f76aa604513a3c3b7107af42176e43977268c62bd): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:03:10 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(5771b09ffeac1c0c2405ffb1aa484cf4612525a0ab7d4efd064c578896593a64): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:03:32 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(6bc921892cdc30b9bf336e00f5f5ac2c405ac1825bb8e98825779324e0234b20): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:03:56 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(22bd832859282a8da699bf99f2581ed77f2e4b02742d1cee37386ed1bd4ccf9a): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:04:18 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(d3d506594437ca4ecb5df5dbc00a732221db23f54a897a86c79de0f142a82dd3): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:04:41 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(ea0276663369634a63daf642f46c387e2eaed45000a958a3347dabf39d211625): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:28.630: INFO: At 2020-09-09 04:05:05 -0400 EDT - event for downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7: {kubelet ostest-5xqm8-worker-0-rzx47} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(3312d8ef1c4439eca0a07022c484292d221c9f55ffcc14c93a064ed0f74c43fa): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep  9 04:05:28.662: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:05:28.662: INFO: 
Sep  9 04:05:28.695: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:05:28.695: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-downward-api-379" for this suite.
Sep  9 04:05:28.741: INFO: Running AfterSuite actions on all nodes
Sep  9 04:05:28.741: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/framework/util.go:715]: Unexpected error:
    <*errors.errorString | 0xc001c23970>: {
        s: "expected pod \"downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7\" success: Gave up after waiting 5m0s for pod \"downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7\" to be \"Succeeded or Failed\"",
    }
    expected pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7" success: Gave up after waiting 5m0s for pod "downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7" to be "Succeeded or Failed"
occurred

Stderr
[sig-storage]_Projected_secret_optional_updates_should_be_reflected_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 203.0s

[sig-storage]_Projected_downwardAPI_should_provide_node_allocatable_(memory)_as_default_memory_limit_if_the_limit_is_not_set_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 99.0s

[sig-node]_ConfigMap_should_be_consumable_via_the_environment_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 104.0s

[sig-storage]_ConfigMap_should_be_consumable_from_pods_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 37.7s

[sig-scheduling]_SchedulerPredicates_[Serial]_validates_that_NodeSelector_is_respected_if_matching__[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 33.6s

[sig-api-machinery]_ResourceQuota_should_create_a_ResourceQuota_and_ensure_its_status_is_promptly_calculated._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 9.0s

[sig-api-machinery]_CustomResourcePublishOpenAPI_[Privileged:ClusterAdmin]_works_for_multiple_CRDs_of_same_group_and_version_but_different_kinds_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 188.0s

[sig-storage]_EmptyDir_volumes_should_support_(root,0666,default)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 43.9s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_patching/updating_a_validating_webhook_should_work_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 47.3s

[sig-api-machinery]_ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_replication_controller._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 12.4s

[sig-api-machinery]_CustomResourceConversionWebhook_[Privileged:ClusterAdmin]_should_be_able_to_convert_a_non_homogeneous_list_of_CRs_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 47.9s

[sig-node]_Downward_API_should_provide_container's_limits.cpu/memory_and_requests.cpu/memory_as_env_vars_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 57.3s

[sig-apps]_Job_should_adopt_matching_orphans_and_release_non-matching_pods_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 50.8s

[sig-apps]_StatefulSet_[k8s.io]_Basic_StatefulSet_functionality_[StatefulSetBasic]_should_have_a_working_scale_subresource_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 82.0s

[k8s.io]_Kubelet_when_scheduling_a_busybox_command_in_a_pod_should_print_the_output_to_logs_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 57.4s

[k8s.io]_Probing_container_should_*not*_be_restarted_with_a_/healthz_http_liveness_probe_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 301.0s

[sig-storage]_Projected_secret_should_be_consumable_from_pods_in_volume_with_mappings_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 50.3s

[sig-api-machinery]_Secrets_should_patch_a_secret_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.9s

[k8s.io]_Security_Context_when_creating_containers_with_AllowPrivilegeEscalation_should_not_allow_privilege_escalation_when_false_[LinuxOnly]_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 42.7s

[sig-storage]_EmptyDir_volumes_should_support_(root,0644,tmpfs)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 48.2s

[sig-cli]_Kubectl_client_Kubectl_logs_should_be_able_to_retrieve_and_filter_logs__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 48.0s

[sig-storage]_Subpath_Atomic_writer_volumes_should_support_subpaths_with_configmap_pod_[LinuxOnly]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 55.8s

[sig-storage]_Downward_API_volume_should_provide_container's_memory_limit_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 37.9s

[sig-storage]_Projected_downwardAPI_should_update_annotations_on_modification_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 36.2s

[k8s.io]_Container_Lifecycle_Hook_when_create_a_pod_with_lifecycle_hook_should_execute_prestop_exec_hook_properly_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 92.0s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_include_webhook_resources_in_discovery_documents_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 56.2s

[sig-api-machinery]_Events_should_delete_a_collection_of_events_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.1s

[k8s.io]_InitContainer_[NodeConformance]_should_not_start_app_containers_if_init_containers_fail_on_a_RestartAlways_pod_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 98.0s

[sig-api-machinery]_Aggregator_Should_be_able_to_support_the_1.17_Sample_API_Server_using_the_current_Aggregator_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 72.0s

[sig-node]_ConfigMap_should_fail_to_create_ConfigMap_with_empty_key_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.9s

[sig-instrumentation]_Events_API_should_delete_a_collection_of_events_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.6s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_mutate_configmap_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 54.5s

[sig-api-machinery]_CustomResourceDefinition_resources_[Privileged:ClusterAdmin]_should_include_custom_resource_definition_resources_in_discovery_documents_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.2s

[k8s.io]_Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message_[LinuxOnly]_from_log_output_if_TerminationMessagePolicy_FallbackToLogsOnError_is_set_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 31.0s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_be_able_to_deny_pod_and_configmap_creation_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 43.6s

[sig-cli]_Kubectl_client_Kubectl_cluster-info_should_check_if_Kubernetes_master_services_is_included_in_cluster-info__[Conformance]_[Disabled:SpecialConfig]_[Suite:k8s]
e2e_tests
Time Taken: 1.3s

[sig-cli]_Kubectl_client_Kubectl_version_should_check_is_all_data_is_printed__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.4s

[sig-storage]_Subpath_Atomic_writer_volumes_should_support_subpaths_with_projected_pod_[LinuxOnly]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 73.0s

[sig-storage]_ConfigMap_should_be_consumable_from_pods_in_volume_with_mappings_as_non-root_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 56.1s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_mutate_custom_resource_with_pruning_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 46.8s

[sig-network]_Proxy_version_v1_should_proxy_through_a_service_and_a_pod__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 85.0s

[k8s.io]_Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message_[LinuxOnly]_if_TerminationMessagePath_is_set_as_non-root_user_and_at_a_non-default_path_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 29.9s

[sig-storage]_Projected_downwardAPI_should_set_mode_on_item_file_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 37.4s

[sig-storage]_EmptyDir_volumes_should_support_(non-root,0777,tmpfs)_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 33.4s

[sig-network]_DNS_should_provide_DNS_for_ExternalName_services_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 99.0s

[sig-network]_Services_should_be_able_to_change_the_type_from_ClusterIP_to_ExternalName_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 138.0s

[sig-storage]_Secrets_should_be_consumable_in_multiple_volumes_in_a_pod_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 34.3s

[sig-storage]_Projected_secret_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_Mode_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 39.7s

[sig-api-machinery]_Garbage_collector_should_keep_the_rc_around_until_all_its_pods_are_deleted_if_the_deleteOptions_says_so_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 52.2s

[sig-storage]_Secrets_should_be_consumable_from_pods_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 144.0s

[sig-storage]_Subpath_Atomic_writer_volumes_should_support_subpaths_with_secret_pod_[LinuxOnly]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 66.0s

[k8s.io]_Pods_should_support_remote_command_execution_over_websockets_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 39.4s

[sig-node]_PodTemplates_should_delete_a_collection_of_pod_templates_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.8s

[sig-storage]_Projected_configMap_should_be_consumable_from_pods_in_volume_with_defaultMode_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 52.0s

[sig-api-machinery]_ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_service._[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 12.6s

[sig-storage]_Secrets_should_be_able_to_mount_in_a_volume_regardless_of_a_different_secret_existing_with_same_name_in_different_namespace_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 168.0s

[sig-cli]_Kubectl_client_Kubectl_describe_should_check_if_kubectl_describe_prints_relevant_information_for_rc_and_pods__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 170.0s

[k8s.io]_[sig-node]_NoExecuteTaintManager_Multiple_Pods_[Serial]_evicts_pods_with_minTolerationSeconds_[Disruptive]_[Conformance]_[Suite:k8s]
e2e_tests
Time Taken: 109.0s

[sig-api-machinery]_Servers_with_support_for_Table_transformation_should_return_a_406_for_a_backend_which_does_not_implement_metadata_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.3s

[sig-storage]_Projected_configMap_should_be_consumable_from_pods_in_volume_as_non-root_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 41.7s

[k8s.io]_Pods_should_get_a_host_IP_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 43.1s

[sig-storage]_ConfigMap_should_be_consumable_in_multiple_volumes_in_the_same_pod_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 47.0s

[sig-storage]_Projected_configMap_optional_updates_should_be_reflected_in_volume_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 37.5s

[sig-node]_PodTemplates_should_run_the_lifecycle_of_PodTemplates_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 1.1s

[sig-network]_DNS_should_support_configurable_pod_DNS_nameservers_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 45.7s

[sig-storage]_Projected_configMap_should_be_consumable_from_pods_in_volume_with_mappings_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 44.8s

[sig-cli]_Kubectl_client_Proxy_server_should_support_proxy_with_--port_0__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 2.3s

[sig-storage]_Secrets_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_Mode_set_[LinuxOnly]_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 63.0s

[sig-api-machinery]_CustomResourceDefinition_Watch_[Privileged:ClusterAdmin]_CustomResourceDefinition_Watch_watch_on_custom_resource_definition_objects_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 64.0s

[k8s.io]_Variable_Expansion_should_succeed_in_writing_subpaths_in_container_[sig-storage][Slow]_[Conformance]_[Suite:k8s]
e2e_tests
Time Taken: 79.0s

[sig-scheduling]_SchedulerPreemption_[Serial]_PreemptionExecutionPath_runs_ReplicaSets_to_verify_preemption_running_path_[Conformance]_[Suite:openshift/conformance/serial/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 123.0s

Failed:
fail [@/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:864]: Unexpected error:
    <*errors.errorString | 0xc00024a860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stdout
I0909 04:35:02.784587  888977 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep  9 04:35:02.849: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep  9 04:35:02.885: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  9 04:35:02.940: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  9 04:35:02.940: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Sep  9 04:35:02.940: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep  9 04:35:02.955: INFO: e2e test version: v0.0.0-master+$Format:%h$
Sep  9 04:35:02.958: INFO: kube-apiserver version: v1.19.0-rc.2+068702d
Sep  9 04:35:02.973: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/framework.go:1425
[BeforeEach] [Top Level]
  github.com/openshift/origin@/test/extended/util/test.go:59
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename sched-preemption
Sep  9 04:35:03.208: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Sep  9 04:35:03.500: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  @/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Sep  9 04:35:03.554: INFO: Waiting up to 1m0s for all nodes to be ready
Sep  9 04:36:03.985: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename sched-preemption-path
Sep  9 04:36:04.221: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  @/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
[AfterEach] PreemptionExecutionPath
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-sched-preemption-path-3057".
STEP: Found 1 events.
Sep  9 04:37:04.701: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for without-label: { } Scheduled: Successfully assigned e2e-sched-preemption-path-3057/without-label to ostest-5xqm8-worker-0-cbbx9
Sep  9 04:37:04.723: INFO: POD            NODE                         PHASE    GRACE  CONDITIONS
Sep  9 04:37:04.723: INFO: without-label  ostest-5xqm8-worker-0-cbbx9  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:36:04 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:36:04 -0400 EDT ContainersNotReady containers with unready status: [without-label]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:36:04 -0400 EDT ContainersNotReady containers with unready status: [without-label]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 04:36:04 -0400 EDT  }]
Sep  9 04:37:04.723: INFO: 
Sep  9 04:37:04.763: INFO: unable to fetch logs for pods: without-label[e2e-sched-preemption-path-3057].container[without-label].error=the server rejected our request for an unknown reason (get pods without-label)
Sep  9 04:37:04.794: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:37:04.794: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-sched-preemption-path-3057" for this suite.
[AfterEach] PreemptionExecutionPath
  @/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
Sep  9 04:37:04.863: INFO: List existing priorities:
Sep  9 04:37:04.863: INFO: sched-preemption-high-priority/1000 created at 2020-09-09 04:35:03 -0400 EDT
Sep  9 04:37:04.863: INFO: sched-preemption-low-priority/1 created at 2020-09-09 04:35:03 -0400 EDT
Sep  9 04:37:04.863: INFO: sched-preemption-medium-priority/100 created at 2020-09-09 04:35:03 -0400 EDT
Sep  9 04:37:04.863: INFO: system-cluster-critical/2000000000 created at 2020-09-08 11:48:25 -0400 EDT
Sep  9 04:37:04.863: INFO: system-node-critical/2000001000 created at 2020-09-08 11:48:25 -0400 EDT
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  @/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "e2e-sched-preemption-3536".
STEP: Found 0 events.
Sep  9 04:37:04.907: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep  9 04:37:04.907: INFO: 
Sep  9 04:37:04.924: INFO: skipping dumping cluster info - cluster too large
Sep  9 04:37:04.924: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-sched-preemption-3536" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  @/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
Sep  9 04:37:05.100: INFO: Running AfterSuite actions on all nodes
Sep  9 04:37:05.100: INFO: Running AfterSuite actions on node 1
fail [@/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:864]: Unexpected error:
    <*errors.errorString | 0xc00024a860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stderr
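Note on the failure above: "timed out waiting for the condition" is the generic message returned by the wait helpers in k8s.io/apimachinery when a polled condition never becomes true before its deadline; in this run the without-label pod stayed Pending (ContainersNotReady) until the framework gave up. Below is a minimal sketch of that polling pattern, assuming a placeholder condition rather than the actual check at k8s.io/kubernetes/test/e2e/scheduling/predicates.go:864.

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	// Placeholder condition; the real e2e check polls pod/node state via the API.
    	neverReady := func() (bool, error) { return false, nil }

    	// Poll every 2s for up to 1m. If the condition never returns true,
    	// PollImmediate returns wait.ErrWaitTimeout, whose message is exactly
    	// "timed out waiting for the condition" -- the text seen in the failure above.
    	if err := wait.PollImmediate(2*time.Second, time.Minute, neverReady); err != nil {
    		fmt.Println(err)
    	}
    }
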
[k8s.io]_Pods_should_be_updated_[NodeConformance]_[Conformance]_[sig-node]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 45.0s

[sig-api-machinery]_AdmissionWebhook_[Privileged:ClusterAdmin]_should_mutate_custom_resource_with_different_stored_version_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 54.3s

[sig-storage]_Downward_API_volume_should_provide_container's_memory_request_[NodeConformance]_[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 57.7s

[sig-api-machinery]_CustomResourceDefinition_resources_[Privileged:ClusterAdmin]_Simple_CustomResourceDefinition_getting/updating/patching_custom_resource_definition_status_sub-resource_works__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 5.1s

[sig-cli]_Kubectl_client_Kubectl_label_should_update_the_label_on_a_resource__[Conformance]_[Suite:openshift/conformance/parallel/minimal]_[Suite:k8s]
e2e_tests
Time Taken: 38.7s

[sig-arch]_Monitor_cluster_while_tests_execute
e2e_tests
Time Taken: 4337.0s

Failed:
298 error level events were detected during this test run:

Sep 09 07:55:14.485 E ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 07:55:33.556 E ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container init container exited with code 137 (Error): 
Sep 09 07:55:33.556 E ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 07:55:33.556 E ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container container exited with code 137 (Error): 
Sep 09 07:57:47.396 E ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 07:57:47.445 E ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/querier container exited with code 137 (Error): 
Sep 09 07:57:47.445 E ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier container exited with code 137 (Error): 
Sep 09 07:57:47.445 E ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 07:58:13.569 E ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 07:58:20.252 E ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr container/externalsvc container exited with code 137 (Error): 
Sep 09 07:58:20.368 E ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 container/externalsvc container exited with code 137 (Error): 
Sep 09 07:58:24.597 E ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container init container exited with code 1 (Error): DONE
Sep 09 07:58:24.597 E ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 07:58:24.597 E ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container container exited with code 1 (Error): DONE
Sep 09 07:58:55.557 E ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container exited with code 1 (Error): 
Sep 09 07:59:12.971 E ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container exited with code 1 (Error): 
Sep 09 07:59:42.066 E ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container exited with code 1 (Error): 
Sep 09 07:59:52.208 E ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request container exited with code 2 (Error): 
Sep 09 08:00:27.357 E ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:02:30.860 E ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init2 container exited with code 1 (Error): 
Sep 09 08:02:30.860 E ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:02:30.860 E ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init2 init container exited with code 1 (Error): 
Sep 09 08:04:40.628 E ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 container/agnhost container exited with code 2 (Error): 
Sep 09 08:04:59.623 E ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 container/main container exited with code 137 (Error): 
Sep 09 08:05:56.817 E ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/querier container exited with code 137 (Error): 
Sep 09 08:05:56.817 E ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier container exited with code 137 (Error): 
Sep 09 08:05:56.817 E ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 08:07:53.565 E ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-3 container exited with code 137 (Error): 
Sep 09 08:07:53.565 E ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-2 container exited with code 137 (Error): 
Sep 09 08:07:53.565 E ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-1 container exited with code 137 (Error): 
Sep 09 08:08:06.675 E ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:08:38.005 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:38.411 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:38.853 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:39.135 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:39.493 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:39.859 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:40.235 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:40.671 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:41.099 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:41.422 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:41.795 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:42.125 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:42.354 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:43.800 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:44.272 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:44.666 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:45.175 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:45.468 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:45.882 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:46.154 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:46.605 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.012 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.302 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.503 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.831 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:48.056 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:48.271 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:48.665 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:49.158 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:49.671 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:50.625 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:51.606 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:52.733 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.097 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.396 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.605 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.830 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:54.045 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:54.357 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:54.886 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:55.128 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:55.555 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:55.965 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.243 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.435 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.643 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.906 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:57.203 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:57.511 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:57.891 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.094 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.379 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.609 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.805 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:59.111 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:59.718 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:00.188 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:01.175 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:01.528 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:01.949 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:02.338 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:02.781 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:03.155 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:03.421 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:03.721 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:04.011 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:04.259 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:04.766 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.083 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.291 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.514 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.835 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.071 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.360 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.561 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.903 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.132 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.420 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.627 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.790 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.917 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.167 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.363 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.530 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.700 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.902 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:09.298 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:09.822 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:10.716 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:11.366 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:11.765 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.008 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.179 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.294 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.554 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.011 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.156 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.307 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.475 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.745 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.909 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.041 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.397 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.621 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.864 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:15.204 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:15.407 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:15.683 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:16.129 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:16.549 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:16.851 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.216 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.416 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.626 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.812 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.934 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.072 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.375 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.598 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.768 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.031 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.346 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.680 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.916 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:20.214 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:20.958 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:21.434 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:21.721 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:22.065 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:22.488 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.235 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.472 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.657 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.832 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:24.106 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:24.341 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:24.777 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:25.012 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:25.537 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:25.942 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.259 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.424 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.591 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.749 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.879 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.014 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.257 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.536 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.821 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.043 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.215 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.395 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.630 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.828 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.981 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.157 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.374 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.590 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.825 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.961 E ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container init container exited with code 2 (Error): 
Sep 09 08:09:29.961 E ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:09:29.961 E ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container container exited with code 2 (Error): 
Sep 09 08:09:30.092 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:30.336 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:30.610 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:30.934 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:31.562 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:31.803 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:31.976 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:32.188 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:32.877 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.092 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.330 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.548 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.811 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.068 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.362 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.641 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.881 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.117 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.347 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.583 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.918 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:36.236 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:36.415 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:36.596 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:36.951 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.262 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.516 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.775 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.977 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.142 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.325 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.548 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.854 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:39.140 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:39.320 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:39.672 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:39.994 E ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver container exited with code 1 (Error): 
Sep 09 08:09:40.058 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:40.353 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:40.661 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:41.068 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:41.284 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:10:18.893 E ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr container/externalname-service container exited with code 137 (Error): 
Sep 09 08:11:45.488 E ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container init container exited with code 2 (Error): 
Sep 09 08:11:45.488 E ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:11:45.488 E ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container container exited with code 2 (Error): 
Sep 09 08:12:55.713 E ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 container/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be init container exited with code 1 (Error): 
Sep 09 08:12:55.713 E ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:12:55.713 E ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 container/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be container exited with code 1 (Error): 
Sep 09 08:13:30.974 E ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 container/busybox container exited with code 137 (Error): 
Sep 09 08:13:54.010 E ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 container/liveness container exited with code 2 (Error): 
Sep 09 08:15:07.042 E ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver container exited with code 2 (Error): 
Sep 09 08:15:15.531 E ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo container exited with code 2 (Error): 
Sep 09 08:15:20.508 E ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier container exited with code 137 (Error): 
Sep 09 08:15:20.508 E ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/querier container exited with code 137 (Error): 
Sep 09 08:15:20.508 E ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 08:16:00.788 E ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 container/secret-test container exited with code 2 (Error): 
Sep 09 08:16:06.723 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa container exited with code 1 (Error): 
Sep 09 08:16:24.813 E ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pause container exited with code 2 (Error): 
Sep 09 08:17:19.382 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof container exited with code 1 (Error): 
Sep 09 08:18:19.568 E ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause container exited with code 2 (Error): 
Sep 09 08:18:45.647 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpn init container exited with code 1 (Error): 
Sep 09 08:18:45.647 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (): 
Sep 09 08:18:45.647 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpn container exited with code 1 (Error): 
Sep 09 08:20:05.163 E ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request container exited with code 2 (Error): 
Sep 09 08:21:31.131 E ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness container exited with code 2 (Error): 
Sep 09 08:21:39.143 E ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c container exited with code 1 (Error): 
Sep 09 08:21:43.983 E ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c container exited with code 1 (Error): 
Sep 09 08:21:47.290 E ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c container exited with code 1 (Error): 
Sep 09 08:21:53.995 E ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c container exited with code 1 (Error): 
Sep 09 08:23:08.321 E ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:23:17.525 E ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:24:08.968 E kube-apiserver Kube API started failing: Get https://api.ostest.shiftstack.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 09 08:24:14.901 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica container exited with code 1 (Error): 
Sep 09 08:24:18.968 - 30s   E kube-apiserver Kube API is not responding to GET requests
Sep 09 08:24:18.968 - 30s   E oauth-apiserver OAuth API is not responding to GET requests
Sep 09 08:24:18.968 - 30s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 09 08:24:32.896 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica container exited with code 1 (Error): 
Sep 09 08:24:33.639 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica container exited with code 1 (Error): 
Sep 09 08:24:52.149 E ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline
Sep 09 08:25:02.350 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica container exited with code 1 (Error): 
Sep 09 08:25:07.136 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica container exited with code 1 (Error): 
Sep 09 08:26:53.939 E ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 container/detector container exited with code 2 (Error): 
Sep 09 08:26:54.522 E ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints container exited with code 255 (Error): 32\n\ngoroutine 1086 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000e68d80)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1100 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000f51080)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1102 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000f512c0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1109 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000f515c0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1111 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000f517a0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1118 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000f51aa0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1120 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000f51c80)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n
Sep 09 08:29:01.028 E clusteroperator/kube-apiserver changed Degraded to True: StaticPods_Error: StaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container "kube-apiserver-check-endpoints" is not ready: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)
Sep 09 08:29:01.968 E kube-apiserver Kube API started failing: Get https://api.ostest.shiftstack.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded
Sep 09 08:29:03.968 - 14s   E kube-apiserver Kube API is not responding to GET requests
Sep 09 08:29:03.968 - 14s   E oauth-apiserver OAuth API is not responding to GET requests
Sep 09 08:29:03.968 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 09 08:29:58.693 E kube-apiserver failed contacting the API: Timeout: Too large resource version: 886455, current: 683998
Sep 09 08:29:59.794 E ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver container exited with code 2 (Error): 
Sep 09 08:30:00.084 E ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 08:30:09.150 E ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-timeout container exited with code 137 (Error): 
Sep 09 08:30:12.140 E ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints container exited with code 255 (Error): _queue.go:68 +0x184\n\ngoroutine 945 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000e10de0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1066 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000b82de0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1068 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b83f20)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1075 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000d08d80)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1077 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000d09da0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1038 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000c485a0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1139 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000e111a0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n
Sep 09 08:31:12.707 E clusteroperator/authentication changed Degraded to True: WellKnownReadyController_SyncError: WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2
Sep 09 08:32:21.495 E ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c init container exited with code 137 (Error): 
Sep 09 08:32:21.495 E ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (): 
Sep 09 08:32:21.495 E ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c container exited with code 137 (Error): 
Sep 09 08:32:21.801 E ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c init container exited with code 137 (Error): 
Sep 09 08:32:21.801 E ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:32:21.801 E ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c container exited with code 137 (Error): 
Sep 09 08:32:29.621 E ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 container/tester container exited with code 137 (Error): 
Sep 09 08:38:27.314 E ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-th8l7 node/ostest-5xqm8-worker-0-rzx47 container/prometheus-adapter container exited with code 2 (Error): er scope\nE0909 06:34:00.571702       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.662017       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.662170       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.725424       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.725671       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 08:13:38.991438       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 08:13:38.991674       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
Sep 09 08:54:15.385 E ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 container/app container exited with code 2 (Error): 
Sep 09 08:57:14.999 E ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 

Stdout
Timeline:

Sep 09 07:54:21.786 I ns/e2e-pod-network-test-9567 pod/netserver-0 node/ reason/Created
Sep 09 07:54:21.886 I ns/e2e-pod-network-test-9567 pod/netserver-1 node/ reason/Created
Sep 09 07:54:21.989 I ns/e2e-pod-network-test-9567 pod/netserver-2 node/ reason/Created
Sep 09 07:54:22.056 I ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:54:22.056 I ns/e2e-webhook-5176 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 07:54:22.101 I ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:54:22.101 I ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 node/ reason/Created
Sep 09 07:54:22.152 I ns/e2e-kubectl-7678 pod/pause node/ reason/Created
Sep 09 07:54:22.181 I ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:54:22.287 I ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ reason/Created
Sep 09 07:54:22.405 I ns/e2e-webhook-5176 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-g7jrq
Sep 09 07:54:22.450 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:54:22.499 I ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:54:22.631 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ reason/Created
Sep 09 07:54:22.636 I ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:54:22.780 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:54:23.269 W ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "default-token-sv6rn" : failed to sync secret cache: timed out waiting for the condition
Sep 09 07:54:23.296 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ reason/Created
Sep 09 07:54:23.404 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:54:23.507 I ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b node/ reason/Created
Sep 09 07:54:23.647 I ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ reason/Created
Sep 09 07:54:23.647 I ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:54:23.706 I ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:54:23.819 W ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
Sep 09 07:54:25.131 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ reason/Created
Sep 09 07:54:25.184 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:54:30.337 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:54:42.458 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:54:46.647 I ns/e2e-pod-network-test-9567 pod/netserver-0 reason/AddedInterface Add eth0 [10.128.133.25/23]
Sep 09 07:54:47.441 I ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulling image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:54:48.509 I ns/e2e-kubectl-7678 pod/pause reason/AddedInterface Add eth0 [10.128.123.207/23]
Sep 09 07:54:48.524 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca reason/AddedInterface Add eth0 [10.128.125.181/23]
Sep 09 07:54:49.203 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Pulling image/k8s.gcr.io/pause:3.2
Sep 09 07:54:49.203 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulling image/docker.io/library/nginx:1.14-alpine
Sep 09 07:54:50.951 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e reason/AddedInterface Add eth0 [10.128.120.76/23]
Sep 09 07:54:51.640 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulling image/docker.io/library/busybox:1.29
Sep 09 07:54:51.740 I ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba reason/AddedInterface Add eth0 [10.128.130.48/23]
Sep 09 07:54:52.432 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:54:52.482 I ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Pulling image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:54:52.624 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 07:54:52.887 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Created
Sep 09 07:54:52.937 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Started
Sep 09 07:54:53.309 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Ready
Sep 09 07:54:53.487 I ns/e2e-pod-network-test-9567 pod/netserver-2 reason/AddedInterface Add eth0 [10.128.132.207/23]
Sep 09 07:54:54.215 I ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulling image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:54:55.660 W ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:54:56.258 I ns/e2e-dns-8435 pod/e2e-dns-8435 reason/AddedInterface Add eth0 [10.128.137.169/23]
Sep 09 07:54:56.945 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Pulling image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:54:57.142 W ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:54:57.221 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Killing
Sep 09 07:54:57.328 I ns/e2e-kubectl-7678 pod/pause node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Killing
Sep 09 07:54:58.476 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:54:58.788 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Created
Sep 09 07:54:58.831 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Started
Sep 09 07:54:59.007 I ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:54:59.035 I ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:54:59.361 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Ready
Sep 09 07:54:59.403 I ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Created
Sep 09 07:54:59.435 I ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 07:54:59.563 I ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 07:54:59.891 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ reason/Created
Sep 09 07:54:59.970 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:00.024 I ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Started
Sep 09 07:55:01.734 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 07:55:01.996 W ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:55:02.032 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 07:55:02.080 W ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 07:55:02.095 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 07:55:02.221 I ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 07:55:02.390 I ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Killing
Sep 09 07:55:02.465 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:55:05.056 W ns/e2e-projected-8526 pod/pod-projected-configmaps-eb5c5238-902f-4f2b-9f5f-8f7e402398ba node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:55:05.181 I ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:05.411 I ns/e2e-pod-network-test-9567 pod/netserver-1 reason/AddedInterface Add eth0 [10.128.133.28/23]
Sep 09 07:55:05.516 I ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 07:55:05.799 I ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 07:55:06.208 I ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulling image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:06.270 I ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 node/ reason/Created
Sep 09 07:55:06.310 I ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq reason/AddedInterface Add eth0 [10.128.143.96/23]
Sep 09 07:55:06.366 I ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:06.586 I ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:06.605 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:06.938 I ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 07:55:06.964 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Created
Sep 09 07:55:07.025 I ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 07:55:07.046 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Started
Sep 09 07:55:07.083 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ reason/Created
Sep 09 07:55:07.117 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:07.257 I ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:07.415 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Ready
Sep 09 07:55:07.531 I ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Created
Sep 09 07:55:07.594 I ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Started
Sep 09 07:55:07.700 I ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 reason/AddedInterface Add eth0 [10.128.139.20/23]
Sep 09 07:55:08.425 I ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:08.721 I ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 07:55:08.828 I ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 07:55:09.393 I ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Ready
Sep 09 07:55:09.715 W ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:55:09.738 I ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Killing
Sep 09 07:55:09.910 I ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 07:55:10.585 W ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:55:11.000 I ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae node/ reason/Created
Sep 09 07:55:11.044 I ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:11.459 W ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:55:11.459 W ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/NotReady
Sep 09 07:55:12.101 W ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 07:55:12.441 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:55:13.403 W ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:55:14.485 E ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 07:55:14.608 W ns/e2e-dns-8435 pod/e2e-dns-8435 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:55:14.738 I ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b reason/AddedInterface Add eth0 [10.128.134.164/23]
Sep 09 07:55:14.816 W ns/e2e-downward-api-2148 pod/downwardapi-volume-47ee370f-3524-45d8-adee-652d65ce4592 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:55:15.433 I ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:15.679 I ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Created
Sep 09 07:55:15.744 I ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Started
Sep 09 07:55:16.870 I ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ reason/Created
Sep 09 07:55:16.940 I ns/e2e-kubectl-9287 replicationcontroller/agnhost-primary reason/SuccessfulCreate Created pod: agnhost-primary-9mvpm
Sep 09 07:55:16.945 W ns/e2e-pods-4933 pod/pod-update-1cebc735-bd67-4577-9d0e-64f86b4af7ca node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:55:16.993 I ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:17.019 W ns/e2e-webhook-5176 pod/sample-webhook-deployment-7bc8486f8c-g7jrq node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:55:18.160 W ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:55:18.786 I ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ reason/Created
Sep 09 07:55:18.853 I ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:19.868 I ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 07:55:21.832 W ns/e2e-secrets-3700 pod/pod-secrets-4cdc7205-1abf-4fb0-b523-0d27dd77324b node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:55:22.662 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:55:24.019 I ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 node/ reason/Created
Sep 09 07:55:24.121 I ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:24.824 I ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 07:55:26.370 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (94 times)
Sep 09 07:55:26.427 I ns/e2e-pod-network-test-9567 pod/test-container-pod node/ reason/Created
Sep 09 07:55:26.549 I ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:29.090 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a reason/AddedInterface Add eth0 [10.128.119.62/23]
Sep 09 07:55:29.708 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/delcm-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:29.995 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/delcm-volume-test reason/Created
Sep 09 07:55:30.041 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/delcm-volume-test reason/Started
Sep 09 07:55:30.054 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/updcm-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:30.280 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/updcm-volume-test reason/Created
Sep 09 07:55:30.332 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/updcm-volume-test reason/Started
Sep 09 07:55:30.380 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/createcm-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:30.612 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/createcm-volume-test reason/Created
Sep 09 07:55:30.694 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/createcm-volume-test reason/Started
Sep 09 07:55:31.536 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/createcm-volume-test reason/Ready
Sep 09 07:55:31.536 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/delcm-volume-test reason/Ready
Sep 09 07:55:31.536 I ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/updcm-volume-test reason/Ready
Sep 09 07:55:32.459 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:55:33.556 E ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container init container exited with code 137 (Error): 
Sep 09 07:55:33.556 E ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 07:55:33.556 E ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 container/dapi-container container exited with code 137 (Error): 
Sep 09 07:55:37.295 W ns/e2e-var-expansion-465 pod/var-expansion-41b3d148-3142-4ae5-ad47-d7632969b66e node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:55:37.383 I ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ reason/Created
Sep 09 07:55:37.497 I ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:38.117 I ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ reason/Created
Sep 09 07:55:38.196 I ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:39.090 I ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ reason/Created
Sep 09 07:55:39.125 I ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:41.192 I ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae reason/AddedInterface Add eth0 [10.128.146.26/23]
Sep 09 07:55:41.887 I ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:42.354 I ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Created
Sep 09 07:55:42.386 I ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 reason/AddedInterface Add eth0 [10.128.128.27/23]
Sep 09 07:55:42.431 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (11 times)
Sep 09 07:55:42.512 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 reason/AddedInterface Add eth0 [10.128.144.190/23]
Sep 09 07:55:42.579 I ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Started
Sep 09 07:55:43.062 W ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 07:55:43.250 I ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:55:43.316 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 container/test reason/Pulling image/k8s.gcr.io/pause:3.2
Sep 09 07:55:43.721 I ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Created
Sep 09 07:55:43.736 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:55:43.875 I ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Started
Sep 09 07:55:44.628 W ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:55:44.628 W ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/createcm-volume-test reason/NotReady
Sep 09 07:55:44.628 W ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/delcm-volume-test reason/NotReady
Sep 09 07:55:44.628 W ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 container/updcm-volume-test reason/NotReady
Sep 09 07:55:44.814 W ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:55:45.353 W ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:55:46.330 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (95 times)
Sep 09 07:55:46.889 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 container/test reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 07:55:47.176 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 container/test reason/Created
Sep 09 07:55:47.239 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 container/test reason/Started
Sep 09 07:55:47.518 I ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 container/test reason/Ready
Sep 09 07:55:47.759 W ns/e2e-projected-2924 pod/pod-projected-configmaps-e83357b3-8d3b-4139-90f8-23425637ec9a node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:55:49.379 W ns/e2e-projected-1939 pod/pod-projected-configmaps-de4c4671-e101-4256-80cd-71c86bbdbcae node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:55:49.912 W ns/e2e-configmap-6692 pod/pod-configmaps-9a7a2ee2-532d-44e9-8a79-273b7e647809 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:55:50.330 I ns/e2e-gc-3757 pod/simpletest.rc-6h6cd node/ reason/Created
Sep 09 07:55:50.369 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-6h6cd
Sep 09 07:55:50.428 I ns/e2e-gc-3757 pod/simpletest.rc-6h6cd node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:50.438 I ns/e2e-gc-3757 pod/simpletest.rc-428zn node/ reason/Created
Sep 09 07:55:50.460 I ns/e2e-gc-3757 pod/simpletest.rc-d67kf node/ reason/Created
Sep 09 07:55:50.501 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-428zn
Sep 09 07:55:50.518 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-d67kf
Sep 09 07:55:50.572 I ns/e2e-gc-3757 pod/simpletest.rc-428zn node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:50.592 I ns/e2e-gc-3757 pod/simpletest.rc-5xjds node/ reason/Created
Sep 09 07:55:50.630 I ns/e2e-gc-3757 pod/simpletest.rc-fchhv node/ reason/Created
Sep 09 07:55:50.632 I ns/e2e-gc-3757 pod/simpletest.rc-d8dsd node/ reason/Created
Sep 09 07:55:50.633 I ns/e2e-gc-3757 pod/simpletest.rc-d67kf node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:55:50.633 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-5xjds
Sep 09 07:55:50.633 I ns/e2e-gc-3757 pod/simpletest.rc-lgp2z node/ reason/Created
Sep 09 07:55:50.717 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-fchhv
Sep 09 07:55:50.726 I ns/e2e-gc-3757 pod/simpletest.rc-fchhv node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:50.745 I ns/e2e-gc-3757 pod/simpletest.rc-5xjds node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:50.764 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-lgp2z
Sep 09 07:55:50.772 I ns/e2e-gc-3757 pod/simpletest.rc-d8dsd node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:55:50.868 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-d8dsd
Sep 09 07:55:50.877 I ns/e2e-gc-3757 pod/simpletest.rc-h65xq node/ reason/Created
Sep 09 07:55:50.878 I ns/e2e-gc-3757 pod/simpletest.rc-phrqw node/ reason/Created
Sep 09 07:55:50.904 I ns/e2e-gc-3757 pod/simpletest.rc-lgp2z node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:50.935 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-phrqw
Sep 09 07:55:50.935 I ns/e2e-gc-3757 pod/simpletest.rc-5qq6m node/ reason/Created
Sep 09 07:55:50.990 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-h65xq
Sep 09 07:55:51.004 I ns/e2e-gc-3757 pod/simpletest.rc-h65xq node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:55:51.058 I ns/e2e-gc-3757 replicationcontroller/simpletest.rc reason/SuccessfulCreate (combined from similar events): Created pod: simpletest.rc-5qq6m
Sep 09 07:55:51.058 I ns/e2e-gc-3757 pod/simpletest.rc-phrqw node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:55:51.077 I ns/e2e-gc-3757 pod/simpletest.rc-5qq6m node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:51.371 W ns/e2e-gc-3757 pod/simpletest.rc-428zn node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-428zn_e2e-gc-3757_eed17169-b7e4-4f6e-995a-cfb420ddbf49_0(756e7368cae131883868c8feecda18bb4a9c1ce71b878a968139bb756d0cbaa4): [e2e-gc-3757/simpletest.rc-428zn:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:55:51.410 W ns/e2e-gc-3757 pod/simpletest.rc-fchhv node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-fchhv_e2e-gc-3757_ee261b92-0a49-441c-b603-4ba9c82ab9fd_0(ebfd0f2d06a1490635eae15d33fa050f20f0157d125510daefafbab60b26bd2d): [e2e-gc-3757/simpletest.rc-fchhv:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:55:51.600 W ns/e2e-gc-3757 pod/simpletest.rc-6h6cd node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "default-token-s4r4n" : failed to sync secret cache: timed out waiting for the condition
Sep 09 07:55:51.960 W ns/e2e-gc-3757 pod/simpletest.rc-h65xq node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-h65xq_e2e-gc-3757_d9d10ee4-4b86-4032-9acf-545a78fc2d24_0(f5a5841889674c80d29a830d6b64eac5bde3acf85fb9440b9ecd40ca35f5ea40): [e2e-gc-3757/simpletest.rc-h65xq:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:55:52.073 I ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 node/ reason/Created
Sep 09 07:55:52.310 I ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:55:52.414 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.198:8090/alive": EOF
Sep 09 07:55:53.256 I ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f node/ reason/Created
Sep 09 07:55:53.342 I ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:55:53.427 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:55:55.496 W ns/e2e-gc-3757 pod/simpletest.rc-5xjds node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:55:55.496 W ns/e2e-gc-3757 pod/simpletest.rc-5qq6m node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:55:55.498 W ns/e2e-gc-3757 pod/simpletest.rc-fchhv node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:55:55.534 W ns/e2e-gc-3757 pod/simpletest.rc-h65xq node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:55:55.535 W ns/e2e-gc-3757 pod/simpletest.rc-428zn node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:55:55.539 W ns/e2e-gc-3757 pod/simpletest.rc-lgp2z node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:55:55.541 W ns/e2e-gc-3757 pod/simpletest.rc-phrqw node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 07:55:55.544 W ns/e2e-gc-3757 pod/simpletest.rc-d8dsd node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 07:55:55.544 W ns/e2e-gc-3757 pod/simpletest.rc-d67kf node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 07:55:55.593 W ns/e2e-gc-3757 pod/simpletest.rc-6h6cd node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:55:56.362 W ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 07:55:56.844 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.196.2.198:8090/ready": EOF
Sep 09 07:56:02.347 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.198:8090/alive": EOF (2 times)
Sep 09 07:56:06.183 I ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 reason/AddedInterface Add eth0 [10.128.152.171/23]
Sep 09 07:56:06.842 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.196.2.198:8090/ready": EOF (2 times)
Sep 09 07:56:06.855 I ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:07.212 I ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Created
Sep 09 07:56:07.846 I ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Started
Sep 09 07:56:10.487 W ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:56:12.367 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (12 times)
Sep 09 07:56:12.570 I ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 reason/AddedInterface Add eth0 [10.128.125.91/23]
Sep 09 07:56:12.669 W ns/e2e-projected-5514 pod/pod-projected-configmaps-bca2160f-6807-4678-87d2-7cf5a54cc1f7 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:12.710 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:56:13.170 I ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 container/main reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:56:13.448 I ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 container/main reason/Created
Sep 09 07:56:13.532 I ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 container/main reason/Started
Sep 09 07:56:13.686 I ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 container/main reason/Ready
Sep 09 07:56:16.334 I ns/e2e-services-1626 pod/externalsvc-5px76 node/ reason/Created
Sep 09 07:56:16.351 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (96 times)
Sep 09 07:56:16.420 I ns/e2e-services-1626 replicationcontroller/externalsvc reason/SuccessfulCreate Created pod: externalsvc-5px76
Sep 09 07:56:16.445 I ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:56:16.674 I ns/e2e-services-1626 pod/externalsvc-2qwfd node/ reason/Created
Sep 09 07:56:16.782 I ns/e2e-services-1626 replicationcontroller/externalsvc reason/SuccessfulCreate Created pod: externalsvc-2qwfd
Sep 09 07:56:16.901 I ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:56:16.901 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.196.2.198:8090/ready": EOF (3 times)
Sep 09 07:56:17.247 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ reason/Created
Sep 09 07:56:17.314 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:56:17.630 W ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_externalsvc-2qwfd_e2e-services-1626_9187df34-4e92-401a-a64a-0c53ba5aae76_0(08ef6b93458c4d3747287ab92adaded66895057b0a9f645661fafe223d0f59c6): [e2e-services-1626/externalsvc-2qwfd:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:56:18.289 I ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m reason/AddedInterface Add eth0 [10.128.159.45/23]
Sep 09 07:56:18.968 - 89s   W ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 07:56:18.968 - 89s   W ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 07:56:19.086 I ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-secret-755m reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:19.240 W ns/e2e-gc-3757 pod/simpletest.rc-d67kf node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:56:19.560 I ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-secret-755m reason/Created
Sep 09 07:56:19.640 I ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-secret-755m reason/Started
Sep 09 07:56:19.739 W ns/e2e-gc-3757 pod/simpletest.rc-fchhv node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:56:19.793 I ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-secret-755m reason/Ready
Sep 09 07:56:20.388 W ns/e2e-gc-3757 pod/simpletest.rc-428zn node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:56:20.432 W ns/e2e-gc-3757 pod/simpletest.rc-d8dsd node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:56:20.454 I ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f reason/AddedInterface Add eth0 [10.128.137.105/23]
Sep 09 07:56:20.483 W ns/e2e-gc-3757 pod/simpletest.rc-lgp2z node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:20.488 W ns/e2e-gc-3757 pod/simpletest.rc-h65xq node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:56:20.488 W ns/e2e-gc-3757 pod/simpletest.rc-5qq6m node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:20.506 W ns/e2e-gc-3757 pod/simpletest.rc-6h6cd node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:21.210 I ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:21.469 I ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Created
Sep 09 07:56:21.521 I ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Started
Sep 09 07:56:22.354 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.198:8090/alive": EOF (3 times)
Sep 09 07:56:22.583 W ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 07:56:23.017 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:56:23.553 W ns/e2e-gc-3757 pod/simpletest.rc-phrqw node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:56:23.672 W ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:56:23.788 I ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 reason/AddedInterface Add eth0 [10.128.131.141/23]
Sep 09 07:56:24.560 I ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 node/ostest-5xqm8-worker-0-twrlr container/projected-secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:24.925 W ns/e2e-secrets-4227 pod/pod-secrets-4944a30d-cad0-48cd-ae94-33601a72249f node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:24.963 I ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 node/ostest-5xqm8-worker-0-twrlr container/projected-secret-volume-test reason/Created
Sep 09 07:56:25.600 I ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 node/ostest-5xqm8-worker-0-twrlr container/projected-secret-volume-test reason/Started
Sep 09 07:56:26.546 W ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 07:56:26.841 I ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e node/ reason/Created
Sep 09 07:56:26.844 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.196.2.198:8090/ready": EOF (4 times)
Sep 09 07:56:26.965 I ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:56:29.763 W ns/e2e-projected-8418 pod/pod-projected-secrets-a3703a33-33ce-4d42-9336-ea692f1c90e6 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:56:31.685 I ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ reason/Created
Sep 09 07:56:31.739 I ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:56:32.166 W ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_externalsvc-2qwfd_e2e-services-1626_9187df34-4e92-401a-a64a-0c53ba5aae76_0(f21e4fadf72ce68ff0d5d6e0e7f73adc78a2bbde7ce12f7af18a96094da41879): [e2e-services-1626/externalsvc-2qwfd:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:56:32.354 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.198:8090/alive": EOF (4 times)
Sep 09 07:56:33.146 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 09 07:56:33.968 - 74s   W ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 07:56:36.846 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.196.2.198:8090/ready": EOF (5 times)
Sep 09 07:56:40.185 W ns/e2e-gc-3757 pod/simpletest.rc-5xjds node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:40.812 W ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:56:42.386 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.198:8090/alive": EOF (5 times)
Sep 09 07:56:42.655 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc reason/AddedInterface Add eth0 [10.128.140.241/23]
Sep 09 07:56:42.672 W ns/e2e-subpath-4773 pod/pod-subpath-test-secret-755m node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:42.725 I ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 node/ reason/Created
Sep 09 07:56:42.762 I ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:56:43.351 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:43.757 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 07:56:43.791 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 07:56:43.801 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:44.029 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ reason/Created
Sep 09 07:56:44.062 I ns/e2e-proxy-2004 replicationcontroller/proxy-service-4kqss reason/SuccessfulCreate Created pod: proxy-service-4kqss-9gmw4
Sep 09 07:56:44.075 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:56:44.144 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/querier reason/Created
Sep 09 07:56:44.206 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/querier reason/Started
Sep 09 07:56:44.218 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Pulling image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 07:56:46.108 W ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_externalsvc-2qwfd_e2e-services-1626_9187df34-4e92-401a-a64a-0c53ba5aae76_0(dbcfe43a3a0d6d59c774916261ba5451104d35fdc56b54855b476438a7e1d1a7): [e2e-services-1626/externalsvc-2qwfd:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:56:46.841 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.196.2.198:8090/ready": EOF (6 times)
Sep 09 07:56:47.037 I ns/e2e-services-1626 pod/externalsvc-5px76 reason/AddedInterface Add eth0 [10.128.138.25/23]
Sep 09 07:56:47.800 I ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr container/externalsvc reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:48.134 I ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr container/externalsvc reason/Created
Sep 09 07:56:48.252 I ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr container/externalsvc reason/Started
Sep 09 07:56:48.340 I ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr container/externalsvc reason/Ready
Sep 09 07:56:48.850 W ns/test pod/demo node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_demo_test_9a107182-fbd1-4d75-bdf8-e7f8940fe166_0(55f8d0cf1429f92f18dd2734c90b92236931b9b387092f67bc22175ddc17c8f1): netplugin failed with no error message: context deadline exceeded
Sep 09 07:56:48.968 - 59s   W ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 07:56:52.363 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (13 times)
Sep 09 07:56:53.903 W ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:56:53.903 W ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 container/main reason/NotReady
Sep 09 07:56:54.940 I ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e reason/AddedInterface Add eth0 [10.128.119.241/23]
Sep 09 07:56:55.602 I ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:55.873 I ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 07:56:56.480 I ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 07:56:57.115 W ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:56:57.225 W ns/e2e-pods-7114 pod/pod-exec-websocket-df7f1cfe-3d93-4786-b624-c5610e7513a9 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:58.828 W ns/e2e-emptydir-9616 pod/pod-3fe23d75-3437-4caf-af64-98c5b5522e9e node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:56:58.858 I ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 reason/AddedInterface Add eth0 [10.128.120.76/23]
Sep 09 07:56:59.574 I ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:56:59.835 I ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Created
Sep 09 07:56:59.918 I ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Started
Sep 09 07:57:00.054 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 07:57:00.059 I ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Ready
Sep 09 07:57:00.402 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Created
Sep 09 07:57:00.530 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Started
Sep 09 07:57:01.172 I ns/e2e-webhook-2955 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 07:57:01.237 I ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ reason/Created
Sep 09 07:57:01.287 I ns/e2e-webhook-2955 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-ps8th
Sep 09 07:57:01.349 I ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:57:01.621 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Ready
Sep 09 07:57:01.621 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 07:57:01.621 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/querier reason/Ready
Sep 09 07:57:01.972 W ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:57:02.349 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.198:8090/alive": EOF (6 times)
Sep 09 07:57:03.357 W ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 07:57:03.449 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ reason/Created
Sep 09 07:57:03.561 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:57:03.596 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Killing
Sep 09 07:57:03.672 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/querier reason/Killing
Sep 09 07:57:03.695 I ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Killing
Sep 09 07:57:06.843 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.196.2.198:8090/ready": EOF (7 times)
Sep 09 07:57:06.845 W ns/e2e-projected-8255 pod/downwardapi-volume-cb0556d5-b283-4d32-8fd0-44bc8103ad17 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:57:07.291 W ns/e2e-dns-6170 pod/dns-test-993f08ec-cc17-448e-a90f-9f6cd6a1aecc node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:57:09.056 I ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ reason/Created
Sep 09 07:57:09.097 I ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:57:09.790 I ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 reason/AddedInterface Add eth0 [10.128.128.237/23]
Sep 09 07:57:09.846 W ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f_e2e-configmap-7096_6969d44a-2cb3-4924-bf62-44a347456dab_0(14512f12b6b91a8e210bb075cd0b0815138c96b80e6e709ce20c63e459e82fb6): [e2e-configmap-7096/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:57:10.437 I ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:57:10.714 I ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Created
Sep 09 07:57:10.848 I ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Started
Sep 09 07:57:11.217 W ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:57:12.260 I ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ reason/Created
Sep 09 07:57:12.309 I ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:57:12.348 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.198:8090/alive": EOF (7 times)
Sep 09 07:57:12.374 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Killing
Sep 09 07:57:14.637 W ns/e2e-container-runtime-6603 pod/termination-message-containerc9850cba-7926-4ac9-b8dc-17f264c99ba0 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:57:16.352 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (97 times)
Sep 09 07:57:18.969 - 29s   W ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 07:57:25.730 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 reason/AddedInterface Add eth0 [10.128.141.93/23]
Sep 09 07:57:26.433 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:26.705 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 07:57:26.748 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 07:57:26.766 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:27.043 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Created
Sep 09 07:57:27.087 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Started
Sep 09 07:57:27.123 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Pulling image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 07:57:35.220 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 reason/AddedInterface Add eth0 [10.128.142.121/23]
Sep 09 07:57:36.556 I ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th reason/AddedInterface Add eth0 [10.128.135.102/23]
Sep 09 07:57:41.419 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 container/proxy-service-4kqss reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:41.428 I ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:41.475 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 07:57:41.830 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 container/proxy-service-4kqss reason/Created
Sep 09 07:57:41.864 I ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Created
Sep 09 07:57:41.879 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Created
Sep 09 07:57:41.906 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 container/proxy-service-4kqss reason/Started
Sep 09 07:57:41.950 I ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Started
Sep 09 07:57:41.971 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Started
Sep 09 07:57:42.332 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Ready
Sep 09 07:57:42.332 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Ready
Sep 09 07:57:42.332 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 07:57:42.595 W ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa_e2e-secrets-6176_abaadd3d-5396-4d55-990a-21dd8ce59cdc_0(65943ffc4789c742d32bf4788e07eec2175fb3da1d3453667499bfc81f5cc5e7): [e2e-secrets-6176/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 07:57:43.007 I ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Ready
Sep 09 07:57:43.406 W ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_test-container-pod_e2e-pod-network-test-9567_68fd7361-323c-434d-bdf1-ca0049c9e6cb_0(181be553a41f48b650b6b11b344a5902e7c4758079e9a6ba2de8bdde49f17b28): [e2e-pod-network-test-9567/test-container-pod:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 07:57:43.474 W ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd_e2e-secrets-4837_b673c647-57fc-484b-a239-dc1011218ce8_0(fcb09cf66bc3e324f35cb71e3b9922c40ac972396dd1a31b40c72a471c892975): netplugin failed: "2020/09/09 07:55:39 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-secrets-4837;K8S_POD_NAME=pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd;K8S_POD_INFRA_CONTAINER_ID=fcb09cf66bc3e324f35cb71e3b9922c40ac972396dd1a31b40c72a471c892975, CNI_NETNS=/var/run/netns/e59e96ad-c486-4f25-b09c-a30af5e58d58).\n2020-09-09T07:56:07Z [verbose] Del: e2e-secrets-4837:pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd:unknownUID:kuryr:eth0 {\"cniVersion\":\"0.3.1\",\"debug\":true,\"kuryr_conf\":\"/etc/kuryr/kuryr.conf\",\"name\":\"kuryr\",\"type\":\"kuryr-cni\"}\n2020/09/09 07:56:07 Calling kuryr-daemon with DEL request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-secrets-4837;K8S_POD_NAME=pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd;K8S_POD_INFRA_CONTAINER_ID=fcb09cf66bc3e324f35cb71e3b9922c40ac972396dd1a31b40c72a471c892975, CNI_NETNS=/var/run/netns/e59e96ad-c486-4f25-b09c-a30af5e58d58).\n"
Sep 09 07:57:43.563 W ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_agnhost-primary-9mvpm_e2e-kubectl-9287_acbdd10f-9dd9-41b7-9a41-22bc1295b473_0(ad2fba29f0aeb73eb4aade727cd07dfd42b92b720d2ec357272463075f9b4448): netplugin failed: "2020/09/09 07:55:17 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-kubectl-9287;K8S_POD_NAME=agnhost-primary-9mvpm;K8S_POD_INFRA_CONTAINER_ID=ad2fba29f0aeb73eb4aade727cd07dfd42b92b720d2ec357272463075f9b4448, CNI_NETNS=/var/run/netns/5563f5d3-60d0-4819-a7f1-e7c455d59bc1).\n2020-09-09T07:55:53Z [verbose] Del: e2e-kubectl-9287:agnhost-primary-9mvpm:unknownUID:kuryr:eth0 {\"cniVersion\":\"0.3.1\",\"debug\":true,\"kuryr_conf\":\"/etc/kuryr/kuryr.conf\",\"name\":\"kuryr\",\"type\":\"kuryr-cni\"}\n2020/09/09 07:55:53 Calling kuryr-daemon with DEL request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-kubectl-9287;K8S_POD_NAME=agnhost-primary-9mvpm;K8S_POD_INFRA_CONTAINER_ID=ad2fba29f0aeb73eb4aade727cd07dfd42b92b720d2ec357272463075f9b4448, CNI_NETNS=/var/run/netns/5563f5d3-60d0-4819-a7f1-e7c455d59bc1).\n"
Sep 09 07:57:43.626 W ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:57:43.741 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f5e70df5263a2a421c4e819534a8ed890a331a1306c851cfe749af494180126
Sep 09 07:57:43.770 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ reason/Created
Sep 09 07:57:43.824 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:57:44.051 W ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:57:44.051 W ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 container/test reason/NotReady
Sep 09 07:57:44.086 W ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f_e2e-configmap-7096_6969d44a-2cb3-4924-bf62-44a347456dab_0(446c3391bbb2ccc6a7e2d0bbaf8dfadbce5d06482a268ba16ef71680c651857c): [e2e-configmap-7096/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:57:44.124 W ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_externalsvc-2qwfd_e2e-services-1626_9187df34-4e92-401a-a64a-0c53ba5aae76_0(6dd4e2f75f41684b18a42f38a9e03aae71bb61459e07b29fe481d0d7fc01910b): [e2e-services-1626/externalsvc-2qwfd:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 07:57:44.159 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Created
Sep 09 07:57:44.241 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Started
Sep 09 07:57:44.340 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Killing
Sep 09 07:57:44.355 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Killing
Sep 09 07:57:44.367 I ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Killing
Sep 09 07:57:45.089 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 07:57:45.089 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 07:57:45.558 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 07:57:46.131 W ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:57:46.920 W ns/e2e-pods-7111 pod/pod-hostip-781ce8f4-2edb-4890-9396-483d19927951 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:57:47.396 E ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 07:57:47.445 E ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/querier container exited with code 137 (Error): 
Sep 09 07:57:47.445 E ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier container exited with code 137 (Error): 
Sep 09 07:57:47.445 E ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 07:57:49.704 W ns/e2e-dns-6170 pod/dns-test-684002ab-b222-4eae-89c7-3062d9521878 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:57:49.705 W ns/e2e-webhook-2955 pod/sample-webhook-deployment-7bc8486f8c-ps8th node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:57:50.628 I ns/e2e-webhook-1376 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 07:57:50.673 I ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ reason/Created
Sep 09 07:57:50.758 I ns/e2e-webhook-1376 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-gxxmb
Sep 09 07:57:50.774 I ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:57:51.039 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 reason/AddedInterface Add eth0 [10.128.140.241/23]
Sep 09 07:57:51.203 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 container/proxy-service-4kqss reason/Ready
Sep 09 07:57:51.728 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:52.013 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 07:57:52.079 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 07:57:52.122 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:52.496 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/querier reason/Created
Sep 09 07:57:52.575 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/querier reason/Started
Sep 09 07:57:52.614 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 07:57:52.866 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Created
Sep 09 07:57:52.943 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Started
Sep 09 07:57:53.001 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Ready
Sep 09 07:57:53.001 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/querier reason/Ready
Sep 09 07:57:53.001 I ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 07:57:53.596 W ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 07:57:53.630 I ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 container/proxy-service-4kqss reason/Killing
Sep 09 07:57:54.048 W ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 07:57:55.526 I ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz reason/AddedInterface Add eth0 [10.128.153.222/23]
Sep 09 07:57:55.560 W ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:57:55.560 W ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 container/proxy-service-4kqss reason/NotReady
Sep 09 07:57:55.690 I ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd reason/AddedInterface Add eth0 [10.128.157.114/23]
Sep 09 07:57:56.103 I ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ reason/Created
Sep 09 07:57:56.194 I ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:57:56.211 W ns/e2e-dns-6170 pod/dns-test-fc5d3434-a001-43c2-998d-1608f59a7aa3 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:57:56.248 I ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-projected-7hxz reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:56.406 I ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:56.705 I ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-projected-7hxz reason/Created
Sep 09 07:57:56.756 I ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-projected-7hxz reason/Started
Sep 09 07:57:56.771 I ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Created
Sep 09 07:57:56.951 I ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Started
Sep 09 07:57:57.456 I ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-projected-7hxz reason/Ready
Sep 09 07:57:57.761 I ns/e2e-pod-network-test-9567 pod/test-container-pod reason/AddedInterface Add eth0 [10.128.133.76/23]
Sep 09 07:57:58.285 W ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:57:58.560 I ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:58.980 I ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f reason/AddedInterface Add eth0 [10.128.124.225/23]
Sep 09 07:57:59.004 I ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 07:57:59.121 I ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa reason/AddedInterface Add eth0 [10.128.123.128/23]
Sep 09 07:57:59.137 I ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 07:57:59.181 I ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm reason/AddedInterface Add eth0 [10.128.149.173/23]
Sep 09 07:57:59.851 I ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:57:59.905 I ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:00.134 I ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:00.176 I ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 07:58:00.407 I ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Created
Sep 09 07:58:00.428 I ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Created
Sep 09 07:58:00.507 I ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Started
Sep 09 07:58:00.561 I ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Created
Sep 09 07:58:00.640 I ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Started
Sep 09 07:58:00.849 I ns/e2e-services-1626 pod/externalsvc-2qwfd reason/AddedInterface Add eth0 [10.128.138.46/23]
Sep 09 07:58:01.069 I ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Started
Sep 09 07:58:01.223 I ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Ready
Sep 09 07:58:01.704 I ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 container/externalsvc reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:01.890 W ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:58:02.182 W ns/e2e-secrets-4837 pod/pod-secrets-60b4b229-8a34-42ec-802f-70906f2fc0bd node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:02.274 W ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:58:02.324 I ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 container/externalsvc reason/Created
Sep 09 07:58:02.386 I ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 container/externalsvc reason/Started
Sep 09 07:58:03.263 I ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 container/externalsvc reason/Ready
Sep 09 07:58:03.399 W ns/e2e-configmap-7096 pod/pod-configmaps-cd7d9d2d-de6a-49bb-b638-ef92b5c84e5f node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:03.757 W ns/e2e-secrets-6176 pod/pod-secrets-5b292aeb-8295-44ba-9b81-ece5a128b4aa node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:03.968 W ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 07:58:04.409 I ns/e2e-services-1626 pod/execpodcmtkx node/ reason/Created
Sep 09 07:58:04.542 I ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:58:04.709 I ns/e2e-events-3640 / reason/Test This is test-event-1
Sep 09 07:58:04.735 I ns/e2e-events-3640 / reason/Test This is test-event-2
Sep 09 07:58:04.762 I ns/e2e-events-3640 / reason/Test This is test-event-3
Sep 09 07:58:05.180 I ns/e2e-webhook-4105 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 07:58:05.328 I ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ reason/Created
Sep 09 07:58:05.374 I ns/e2e-webhook-4105 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-5qfcl
Sep 09 07:58:05.439 I ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:58:06.015 I ns/e2e-aggregator-2390 deployment/sample-apiserver-deployment reason/ScalingReplicaSet Scaled up replica set sample-apiserver-deployment-67c46cd746 to 1
Sep 09 07:58:06.115 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ reason/Created
Sep 09 07:58:06.210 I ns/e2e-aggregator-2390 replicaset/sample-apiserver-deployment-67c46cd746 reason/SuccessfulCreate Created pod: sample-apiserver-deployment-67c46cd746-r4dm6
Sep 09 07:58:06.330 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:58:06.482 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ reason/Created
Sep 09 07:58:06.585 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:58:06.955 I ns/e2e-events-2475 / reason/Test This is test-event-1
Sep 09 07:58:06.974 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 07:58:06.974 I ns/e2e-events-2475 / reason/Test This is test-event-2
Sep 09 07:58:07.076 I ns/e2e-events-2475 / reason/Test This is test-event-3
Sep 09 07:58:07.399 W ns/e2e-proxy-2004 pod/proxy-service-4kqss-9gmw4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:58:07.911 W clusteroperator/network changed Progressing to False
Sep 09 07:58:08.805 I ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ reason/Created
Sep 09 07:58:08.874 I ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:58:08.930 I ns/e2e-webhook-8416 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 07:58:08.956 I ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ reason/Created
Sep 09 07:58:09.005 I ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:58:09.025 I ns/e2e-webhook-8416 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-24s6z
Sep 09 07:58:09.099 I ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ reason/Created
Sep 09 07:58:09.160 I ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:58:12.528 W ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 07:58:12.566 W ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 07:58:12.580 W ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 07:58:12.624 W ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 07:58:12.656 W ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 07:58:13.569 E ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 07:58:13.740 I ns/e2e-services-1626 pod/execpodcmtkx reason/AddedInterface Add eth0 [10.128.138.114/23]
Sep 09 07:58:14.271 W ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:14.271 W ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 07:58:14.355 W ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:14.355 W ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 07:58:14.420 W ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:14.420 W ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 07:58:14.531 W ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:14.531 W ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/NotReady
Sep 09 07:58:14.533 I ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:14.874 I ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Created
Sep 09 07:58:14.923 I ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Started
Sep 09 07:58:15.387 W ns/e2e-pod-network-test-9567 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:58:15.570 W ns/e2e-kubectl-9287 pod/agnhost-primary-9mvpm node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:15.668 I ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Ready
Sep 09 07:58:16.795 W ns/e2e-pod-network-test-9567 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:17.162 W ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 1s
Sep 09 07:58:17.186 W ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 07:58:17.214 I ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 container/externalsvc reason/Killing
Sep 09 07:58:17.214 I ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr container/externalsvc reason/Killing
Sep 09 07:58:17.255 W ns/e2e-pod-network-test-9567 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:17.255 W ns/e2e-pod-network-test-9567 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:58:17.916 I ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb reason/AddedInterface Add eth0 [10.128.118.81/23]
Sep 09 07:58:18.669 I ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:18.941 I ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Created
Sep 09 07:58:19.000 I ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Started
Sep 09 07:58:19.795 W ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:58:20.252 E ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr container/externalsvc container exited with code 137 (Error): 
Sep 09 07:58:20.299 I ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Ready
Sep 09 07:58:20.368 E ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 container/externalsvc container exited with code 137 (Error): 
Sep 09 07:58:21.389 W ns/e2e-services-1626 pod/externalsvc-2qwfd node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:23.300 I ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 reason/AddedInterface Add eth0 [10.128.127.220/23]
Sep 09 07:58:23.445 W ns/e2e-subpath-6101 pod/pod-subpath-test-projected-7hxz node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:58:24.018 I ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:58:24.271 I ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Created
Sep 09 07:58:24.399 I ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Started
Sep 09 07:58:24.597 E ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container init container exited with code 1 (Error): DONE
Sep 09 07:58:24.597 E ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 07:58:24.597 E ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container container exited with code 1 (Error): DONE
Sep 09 07:58:25.020 I ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce node/ reason/Created
Sep 09 07:58:25.075 W ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:58:25.087 I ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:58:26.292 I ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ reason/Created
Sep 09 07:58:26.329 I ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:58:26.664 W ns/e2e-container-runtime-5729 pod/termination-message-containereeedcce5-c847-47c1-8fd1-5c73aeb18ad3 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:58:32.511 W ns/e2e-services-1626 pod/externalsvc-5px76 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 07:58:33.094 W ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:58:34.284 I ns/e2e-kubectl-9928 pod/logs-generator node/ reason/Created
Sep 09 07:58:34.335 I ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:58:34.403 I ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d node/ reason/Created
Sep 09 07:58:34.483 I ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:58:34.674 W ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:34.674 W ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/NotReady
Sep 09 07:58:37.305 W ns/e2e-webhook-1376 pod/sample-webhook-deployment-7bc8486f8c-gxxmb node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:58:37.722 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c reason/AddedInterface Add eth0 [10.128.154.77/23]
Sep 09 07:58:38.070 I ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace reason/AddedInterface Add eth0 [10.128.131.205/23]
Sep 09 07:58:38.727 I ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:39.001 I ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Created
Sep 09 07:58:39.043 I ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Started
Sep 09 07:58:39.084 W ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:58:39.752 I ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Ready
Sep 09 07:58:40.453 W ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:40.453 W ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/NotReady
Sep 09 07:58:43.317 W ns/e2e-services-1626 pod/execpodcmtkx node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:58:43.512 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Pulling image/docker.io/library/busybox:1.29
Sep 09 07:58:44.407 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 reason/AddedInterface Add eth0 [10.128.135.43/23]
Sep 09 07:58:45.157 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/sample-apiserver reason/Pulling image/gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17
Sep 09 07:58:45.266 I ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ reason/Created
Sep 09 07:58:45.341 I ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:58:49.057 W ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 07:58:50.757 W ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:50.757 W ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/NotReady
Sep 09 07:58:51.531 I ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl reason/AddedInterface Add eth0 [10.128.121.39/23]
Sep 09 07:58:51.740 W ns/e2e-projected-6756 pod/annotationupdated7b7c3b6-f2e6-4adb-b11e-17e09a829ace node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:58:52.158 I ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:52.277 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/sample-apiserver reason/Pulled image/gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17
Sep 09 07:58:52.457 I ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 07:58:52.520 I ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 07:58:52.601 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/sample-apiserver reason/Created
Sep 09 07:58:52.656 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/sample-apiserver reason/Started
Sep 09 07:58:52.666 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/etcd reason/Pulling image/k8s.gcr.io/etcd:3.4.9
Sep 09 07:58:53.139 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:58:54.341 I ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 07:58:54.606 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Created
Sep 09 07:58:54.687 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Started
Sep 09 07:58:55.100 I ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce reason/AddedInterface Add eth0 [10.128.160.174/23]
Sep 09 07:58:55.509 I ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g reason/AddedInterface Add eth0 [10.128.150.14/23]
Sep 09 07:58:55.517 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:58:55.557 E ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container exited with code 1 (Error): 
Sep 09 07:58:55.889 I ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:56.245 I ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 07:58:56.271 I ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 07:58:56.426 I ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-4g7g reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:56.965 I ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-4g7g reason/Created
Sep 09 07:58:56.989 W ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:58:57.063 I ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-4g7g reason/Started
Sep 09 07:58:57.074 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Created
Sep 09 07:58:57.127 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Started
Sep 09 07:58:57.224 I ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z reason/AddedInterface Add eth0 [10.128.145.104/23]
Sep 09 07:58:57.632 I ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request reason/AddedInterface Add eth0 [10.128.146.129/23]
Sep 09 07:58:57.632 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container restarted
Sep 09 07:58:57.644 W ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:58:57.748 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container
Sep 09 07:58:57.887 I ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-4g7g reason/Ready
Sep 09 07:58:57.937 I ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:58.218 I ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 07:58:58.271 I ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 07:58:58.412 I ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:58:58.552 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container (2 times)
Sep 09 07:58:58.569 W ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:58:58.569 W ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/NotReady
Sep 09 07:58:58.706 I ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Created
Sep 09 07:58:58.789 I ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Started
Sep 09 07:58:58.823 I ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Ready
Sep 09 07:58:59.210 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ reason/Created
Sep 09 07:58:59.220 I ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 07:58:59.281 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:59:00.021 I ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 node/ reason/Created
Sep 09 07:59:00.041 W ns/e2e-webhook-4105 pod/sample-webhook-deployment-7bc8486f8c-5qfcl node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:59:00.124 I ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:59:00.866 W ns/e2e-downward-api-9081 pod/downwardapi-volume-11d7c5ae-1e96-4518-b9a1-dc5425ca69ce node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:59:02.314 W ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:59:03.255 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ reason/Created
Sep 09 07:59:03.334 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:59:03.472 I ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ reason/Created
Sep 09 07:59:03.560 I ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:59:03.639 W ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:59:03.639 W ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/NotReady
Sep 09 07:59:07.339 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/etcd reason/Pulled image/k8s.gcr.io/etcd:3.4.9
Sep 09 07:59:07.605 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/etcd reason/Created
Sep 09 07:59:07.689 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/etcd reason/Started
Sep 09 07:59:07.898 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/etcd reason/Ready
Sep 09 07:59:07.898 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/sample-apiserver reason/Ready
Sep 09 07:59:08.000 I ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d reason/AddedInterface Add eth0 [10.128.162.218/23]
Sep 09 07:59:08.091 W ns/e2e-webhook-8416 pod/sample-webhook-deployment-7bc8486f8c-24s6z node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:59:08.595 I ns/e2e-kubectl-9928 pod/logs-generator reason/AddedInterface Add eth0 [10.128.140.169/23]
Sep 09 07:59:08.797 I ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:59:09.081 I ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 07:59:09.187 I ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 07:59:09.284 I ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 container/logs-generator reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:59:09.548 I ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 container/logs-generator reason/Created
Sep 09 07:59:09.586 I ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 container/logs-generator reason/Started
Sep 09 07:59:09.922 I ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 container/logs-generator reason/Ready
Sep 09 07:59:10.765 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:59:10.878 W ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:59:12.224 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Created
Sep 09 07:59:12.351 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Started
Sep 09 07:59:12.959 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container (3 times)
Sep 09 07:59:12.971 E ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container exited with code 1 (Error): 
Sep 09 07:59:12.971 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container restarted
Sep 09 07:59:15.032 W ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 07:59:15.075 I ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 container/logs-generator reason/Killing
Sep 09 07:59:15.916 W ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:59:15.979 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/sample-apiserver reason/Killing
Sep 09 07:59:16.023 I ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/etcd reason/Killing
Sep 09 07:59:16.967 W ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:59:16.967 W ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 container/logs-generator reason/NotReady
Sep 09 07:59:17.818 I ns/e2e-statefulset-1128 pod/ss-0 node/ reason/Created
Sep 09 07:59:17.836 I ns/e2e-statefulset-1128 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful
Sep 09 07:59:17.921 I ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:59:18.048 W ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:59:18.048 W ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/etcd reason/NotReady
Sep 09 07:59:18.048 W ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 container/sample-apiserver reason/NotReady
Sep 09 07:59:18.865 W ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:59:18.969 W ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 07:59:18.969 - 44s   W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 07:59:19.197 W ns/e2e-emptydir-4507 pod/pod-086c02e3-6699-478e-99b2-c488837e0f0d node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:59:20.238 W ns/e2e-kubectl-9928 pod/logs-generator node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:59:20.322 I ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef reason/AddedInterface Add eth0 [10.128.133.141/23]
Sep 09 07:59:20.586 W ns/e2e-subpath-7911 pod/pod-subpath-test-configmap-4g7g node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:59:20.907 I ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ostest-5xqm8-worker-0-rzx47 container/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef reason/Pulling image/gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0
Sep 09 07:59:22.178 I ns/e2e-job-8004 pod/adopt-release-dz85k node/ reason/Created
Sep 09 07:59:22.330 I ns/e2e-job-8004 job/adopt-release reason/SuccessfulCreate Created pod: adopt-release-dz85k
Sep 09 07:59:22.404 I ns/e2e-job-8004 pod/adopt-release-dz85k node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:59:22.404 I ns/e2e-job-8004 pod/adopt-release-z5k4f node/ reason/Created
Sep 09 07:59:22.465 I ns/e2e-job-8004 job/adopt-release reason/SuccessfulCreate Created pod: adopt-release-z5k4f
Sep 09 07:59:22.569 I ns/e2e-job-8004 pod/adopt-release-z5k4f node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:59:22.995 I ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ reason/Created
Sep 09 07:59:23.061 I ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 07:59:23.754 I ns/e2e-crd-webhook-3879 deployment/sample-crd-conversion-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-crd-conversion-webhook-deployment-84c84cf5f9 to 1
Sep 09 07:59:23.843 I ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ reason/Created
Sep 09 07:59:23.888 I ns/e2e-crd-webhook-3879 replicaset/sample-crd-conversion-webhook-deployment-84c84cf5f9 reason/SuccessfulCreate Created pod: sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z
Sep 09 07:59:23.924 I ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 07:59:25.608 I ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ostest-5xqm8-worker-0-rzx47 container/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef reason/Pulled image/gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0
Sep 09 07:59:25.890 I ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ostest-5xqm8-worker-0-rzx47 container/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef reason/Created
Sep 09 07:59:25.991 I ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ostest-5xqm8-worker-0-rzx47 container/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef reason/Started
Sep 09 07:59:26.767 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container (4 times)
Sep 09 07:59:27.371 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook reason/AddedInterface Add eth0 [10.128.147.108/23]
Sep 09 07:59:28.096 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-exec-hook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:59:28.182 W ns/e2e-aggregator-2390 pod/sample-apiserver-deployment-67c46cd746-r4dm6 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:59:28.353 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-exec-hook reason/Created
Sep 09 07:59:28.425 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-exec-hook reason/Started
Sep 09 07:59:28.805 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-exec-hook reason/Ready
Sep 09 07:59:29.307 W ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 15s
Sep 09 07:59:30.726 I ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-exec-hook reason/Killing
Sep 09 07:59:32.795 W ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 07:59:32.795 W ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-exec-hook reason/NotReady
Sep 09 07:59:33.771 W ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 07:59:38.131 W ns/e2e-security-context-test-759 pod/alpine-nnp-false-b9d1bdd9-2dc1-415e-b287-e96ca44473ef node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:59:39.320 W ns/e2e-container-lifecycle-hook-5163 pod/pod-with-prestop-exec-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:59:39.772 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:59:40.166 I ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 reason/AddedInterface Add eth0 [10.128.122.233/23]
Sep 09 07:59:40.506 I ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 node/ reason/Created
Sep 09 07:59:40.612 I ns/e2e-webhook-5534 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 07:59:40.955 I ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:59:41.061 I ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 07:59:41.272 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Created
Sep 09 07:59:41.386 I ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Started
Sep 09 07:59:41.411 I ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ reason/Created
Sep 09 07:59:41.477 I ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:59:41.484 I ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Created
Sep 09 07:59:41.508 I ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Started
Sep 09 07:59:41.692 I ns/e2e-webhook-5534 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-q5hmf
Sep 09 07:59:41.924 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container (5 times)
Sep 09 07:59:42.066 E ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container exited with code 1 (Error): 
Sep 09 07:59:42.066 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 container/init1 init container restarted
Sep 09 07:59:43.053 W ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 07:59:48.006 W ns/e2e-projected-4151 pod/pod-projected-secrets-601f5c38-3f13-4bc8-ac02-55ccbc94d788 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 07:59:50.871 W ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 07:59:52.208 E ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request container exited with code 2 (Error): 
Sep 09 07:59:53.501 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 07:59:57.300 W ns/e2e-container-lifecycle-hook-5163 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 07:59:57.693 I ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 reason/AddedInterface Add eth0 [10.128.143.126/23]
Sep 09 07:59:58.025 I ns/e2e-statefulset-1128 pod/ss-0 reason/AddedInterface Add eth0 [10.128.148.14/23]
Sep 09 07:59:58.487 I ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 07:59:58.836 I ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 reason/Created
Sep 09 07:59:58.956 I ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 reason/Started
Sep 09 07:59:58.966 I ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulling image/docker.io/library/httpd:2.4.38-alpine
Sep 09 07:59:59.043 I ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 reason/Ready
Sep 09 07:59:59.407 I ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f node/ reason/Created
Sep 09 07:59:59.517 I ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 07:59:59.573 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 reason/AddedInterface Add eth0 [10.128.125.110/23]
Sep 09 08:00:00.358 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:00:00.624 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Created
Sep 09 08:00:00.688 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Started
Sep 09 08:00:00.943 I ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 node/ reason/Created
Sep 09 08:00:00.988 I ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:00:01.004 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Ready
Sep 09 08:00:02.594 I ns/e2e-job-8004 pod/adopt-release-z5k4f reason/AddedInterface Add eth0 [10.128.127.146/23]
Sep 09 08:00:02.903 I ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f reason/AddedInterface Add eth0 [10.128.156.47/23]
Sep 09 08:00:03.086 I ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z reason/AddedInterface Add eth0 [10.128.153.114/23]
Sep 09 08:00:03.185 I ns/e2e-job-8004 pod/adopt-release-z5k4f node/ostest-5xqm8-worker-0-rzx47 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:00:03.470 I ns/e2e-job-8004 pod/adopt-release-z5k4f node/ostest-5xqm8-worker-0-rzx47 container/c reason/Created
Sep 09 08:00:03.507 I ns/e2e-job-8004 pod/adopt-release-z5k4f node/ostest-5xqm8-worker-0-rzx47 container/c reason/Started
Sep 09 08:00:03.630 I ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ostest-5xqm8-worker-0-twrlr container/dapi-container reason/Pulling image/docker.io/library/busybox:1.29
Sep 09 08:00:03.797 I ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ostest-5xqm8-worker-0-cbbx9 container/sample-crd-conversion-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:00:04.061 I ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ostest-5xqm8-worker-0-cbbx9 container/sample-crd-conversion-webhook reason/Created
Sep 09 08:00:04.113 I ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ostest-5xqm8-worker-0-cbbx9 container/sample-crd-conversion-webhook reason/Started
Sep 09 08:00:04.196 I ns/e2e-job-8004 pod/adopt-release-z5k4f node/ostest-5xqm8-worker-0-rzx47 container/c reason/Ready
Sep 09 08:00:05.489 I ns/e2e-job-8004 pod/adopt-release-dz85k reason/AddedInterface Add eth0 [10.128.127.38/23]
Sep 09 08:00:05.663 I ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ostest-5xqm8-worker-0-cbbx9 container/sample-crd-conversion-webhook reason/Ready
Sep 09 08:00:05.736 W ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:00:06.218 I ns/e2e-job-8004 pod/adopt-release-dz85k node/ostest-5xqm8-worker-0-rzx47 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:00:06.494 I ns/e2e-job-8004 pod/adopt-release-dz85k node/ostest-5xqm8-worker-0-rzx47 container/c reason/Created
Sep 09 08:00:06.551 I ns/e2e-job-8004 pod/adopt-release-dz85k node/ostest-5xqm8-worker-0-rzx47 container/c reason/Started
Sep 09 08:00:06.885 W ns/e2e-init-container-353 pod/pod-init-f71b7def-b78e-45c2-b00c-204f28f13f0c node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:00:07.265 I ns/e2e-job-8004 pod/adopt-release-dz85k node/ostest-5xqm8-worker-0-rzx47 container/c reason/Ready
Sep 09 08:00:08.988 W ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:00:10.181 I ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ reason/Created
Sep 09 08:00:10.244 I ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:00:11.202 I ns/e2e-job-8004 pod/adopt-release-bjtmj node/ reason/Created
Sep 09 08:00:11.914 W ns/e2e-crd-webhook-3879 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-27s9z node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:00:12.519 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ reason/Created
Sep 09 08:00:12.812 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:00:12.831 I ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ostest-5xqm8-worker-0-twrlr container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:00:13.151 I ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ostest-5xqm8-worker-0-twrlr container/dapi-container reason/Created
Sep 09 08:00:13.244 I ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ostest-5xqm8-worker-0-twrlr container/dapi-container reason/Started
Sep 09 08:00:13.342 I ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:00:13.694 I ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:00:13.752 I ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:00:13.856 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "updates-volume" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:00:13.929 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "deletes-volume" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:00:14.013 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "default-token-q7bpf" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:00:14.248 I ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:00:15.958 I ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 reason/AddedInterface Add eth0 [10.128.164.58/23]
Sep 09 08:00:16.128 W ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:00:16.673 I ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:00:16.973 I ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:00:17.018 I ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:00:17.942 I ns/e2e-statefulset-1128 pod/ss-1 node/ reason/Created
Sep 09 08:00:18.020 I ns/e2e-statefulset-1128 pod/ss-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:00:18.038 I ns/e2e-statefulset-1128 statefulset/ss reason/SuccessfulCreate create Pod ss-1 in StatefulSet ss successful
Sep 09 08:00:18.085 W ns/e2e-downward-api-7986 pod/downward-api-a95c28ec-4f11-45a7-8267-3d97e3054c3f node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:00:18.142 W ns/e2e-statefulset-1128 pod/ss-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:00:18.212 I ns/e2e-statefulset-1128 statefulset/ss reason/SuccessfulDelete delete Pod ss-1 in StatefulSet ss successful
Sep 09 04:00:18.284 - 310s  I test="[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:00:19.322 W ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:00:20.010 W ns/e2e-job-8004 pod/adopt-release-bjtmj node/ reason/GracefulDelete in 0s
Sep 09 08:00:20.024 W ns/e2e-job-8004 pod/adopt-release-bjtmj node/ reason/Deleted
Sep 09 08:00:20.045 W ns/e2e-job-8004 pod/adopt-release-dz85k node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:00:20.070 W ns/e2e-job-8004 pod/adopt-release-z5k4f node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:00:20.136 I ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ reason/Created
Sep 09 08:00:20.432 I ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:00:20.683 I ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf reason/AddedInterface Add eth0 [10.128.139.50/23]
Sep 09 08:00:21.371 I ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:00:21.634 I ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Created
Sep 09 08:00:21.753 I ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Started
Sep 09 08:00:21.887 W ns/e2e-job-8004 pod/adopt-release-dz85k node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:00:22.079 W ns/e2e-job-8004 pod/adopt-release-z5k4f node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:00:22.734 I ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Ready
Sep 09 08:00:22.875 W ns/e2e-emptydir-8152 pod/pod-6ce3ff20-87e4-4226-a27d-54e679cf17d8 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 04:00:23.392 - 314s  I test="[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:00:24.900 I ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ reason/Created
Sep 09 08:00:24.956 I ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:00:25.900 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (38 times)
Sep 09 08:00:26.251 W ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:00:27.273 W ns/e2e-statefulset-1128 pod/ss-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:00:27.351 W ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:00:27.357 E ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:00:27.371 I ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 08:00:27.410 I ns/e2e-statefulset-1128 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful
Sep 09 08:00:27.656 I ns/e2e-statefulset-858 pod/ss-0 node/ reason/Created
Sep 09 08:00:27.677 I ns/e2e-statefulset-858 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful
Sep 09 08:00:27.756 W ns/e2e-webhook-5534 pod/sample-webhook-deployment-7bc8486f8c-q5hmf node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:00:27.763 I ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:00:29.144 W ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.148.14:80/index.html": dial tcp 10.128.148.14:80: i/o timeout (Client.Timeout exceeded while awaiting headers)
Sep 09 08:00:29.177 W ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:00:29.243 W ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:00:29.595 I ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f reason/AddedInterface Add eth0 [10.128.121.105/23]
Sep 09 08:00:30.157 W ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.148.14:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:00:30.396 I ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:00:30.723 I ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Created
Sep 09 08:00:30.874 I ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Started
Sep 09 08:00:31.488 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 09 08:00:31.695 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 09 08:00:31.839 W ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:00:33.943 W ns/e2e-configmap-8748 pod/pod-configmaps-639ff57e-a379-4236-9974-03b24556104f node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:00:33.968 W ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:00:35.053 I ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 reason/AddedInterface Add eth0 [10.128.168.146/23]
Sep 09 08:00:35.779 I ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 node/ostest-5xqm8-worker-0-cbbx9 container/env-test reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:00:36.185 I ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 node/ostest-5xqm8-worker-0-cbbx9 container/env-test reason/Created
Sep 09 08:00:36.296 I ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 node/ostest-5xqm8-worker-0-cbbx9 container/env-test reason/Started
Sep 09 08:00:36.805 W ns/e2e-statefulset-1128 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:00:37.313 W ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:00:37.313 W ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 reason/NotReady
Sep 09 08:00:37.764 W ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:00:37.810 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:00:37.810 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 04:00:38.090 - 311s  I test="[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:00:38.186 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ reason/Created
Sep 09 08:00:38.375 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:00:38.772 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:00:40.340 I ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ reason/Created
Sep 09 08:00:40.487 I ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:00:46.778 W ns/e2e-kubelet-test-332 pod/busybox-scheduling-1985deb8-4dbf-4c09-bbdb-8532a86e0eb8 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:01:18.968 - 15s   W ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:01:18.968 - 44s   W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:01:33.969 - 224s  W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:01:33.969 - 239s  W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:01:33.969 - 254s  W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:01:42.256 I ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 reason/AddedInterface Add eth0 [10.128.128.145/23]
Sep 09 08:01:42.554 W ns/e2e-configmap-4209 pod/pod-configmaps-8274a646-88b9-4920-87d4-d31be6ed9b67 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:01:43.082 I ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:01:43.420 I ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 08:01:43.488 I ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 08:01:43.852 W ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 04:01:43.949 - 304s  I test="[sig-apps] Deployment deployment should support proportional scaling [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:01:45.687 I ns/e2e-deployment-4302 deployment/webserver-deployment reason/ScalingReplicaSet Scaled up replica set webserver-deployment-dd94f59b7 to 10
Sep 09 08:01:45.759 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ reason/Created
Sep 09 08:01:45.802 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-48q9w
Sep 09 08:01:45.828 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ reason/Created
Sep 09 08:01:45.853 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ reason/Created
Sep 09 08:01:45.871 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:01:45.871 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-p26k2
Sep 09 08:01:45.886 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:01:45.894 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-t6zlg
Sep 09 08:01:45.928 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:01:45.952 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ reason/Created
Sep 09 08:01:45.981 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ reason/Created
Sep 09 08:01:45.982 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ reason/Created
Sep 09 08:01:45.982 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ reason/Created
Sep 09 08:01:45.994 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-hvwv8
Sep 09 08:01:45.995 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:01:46.016 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-8pm7c
Sep 09 08:01:46.022 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ reason/Created
Sep 09 08:01:46.025 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ reason/Created
Sep 09 08:01:46.037 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ reason/Created
Sep 09 08:01:46.056 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:01:46.077 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-pfc5z
Sep 09 08:01:46.115 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-prjxj
Sep 09 08:01:46.135 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-p88mh
Sep 09 08:01:46.168 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:01:46.179 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:01:46.187 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate Created pod: webserver-deployment-dd94f59b7-m4bk9
Sep 09 08:01:46.211 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:01:46.211 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:01:46.212 W ns/e2e-projected-2258 pod/downwardapi-volume-238d58cf-2ff4-4edd-b234-14af6c88c7f2 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:01:46.222 I ns/e2e-deployment-4302 replicaset/webserver-deployment-dd94f59b7 reason/SuccessfulCreate (combined from similar events): Created pod: webserver-deployment-dd94f59b7-sqscr
Sep 09 08:01:46.223 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:01:46.257 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:01:46.902 W clusteroperator/network changed Progressing to False
Sep 09 08:01:48.970 - 29s   W ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:01:48.970 - 224s  W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:01:49.024 I ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ reason/Created
Sep 09 08:01:49.119 I ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:02:06.091 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w reason/AddedInterface Add eth0 [10.128.174.146/23]
Sep 09 08:02:06.255 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg reason/AddedInterface Add eth0 [10.128.175.207/23]
Sep 09 08:02:06.502 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj reason/AddedInterface Add eth0 [10.128.174.14/23]
Sep 09 08:02:06.702 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:07.003 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Created
Sep 09 08:02:07.059 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Started
Sep 09 08:02:07.274 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Pulling image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:07.529 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Pulling image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:07.529 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh reason/AddedInterface Add eth0 [10.128.175.209/23]
Sep 09 08:02:07.671 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Ready
Sep 09 08:02:08.223 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Pulling image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:09.138 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 reason/AddedInterface Add eth0 [10.128.175.85/23]
Sep 09 08:02:09.394 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr reason/AddedInterface Add eth0 [10.128.174.154/23]
Sep 09 08:02:09.420 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 reason/AddedInterface Add eth0 [10.128.174.211/23]
Sep 09 08:02:09.824 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Pulling image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:10.102 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Pulling image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:10.137 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:10.431 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Created
Sep 09 08:02:10.504 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Started
Sep 09 08:02:10.680 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Ready
Sep 09 08:02:13.044 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c reason/AddedInterface Add eth0 [10.128.175.87/23]
Sep 09 08:02:13.434 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e reason/AddedInterface Add eth0 [10.128.145.231/23]
Sep 09 08:02:13.505 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z reason/AddedInterface Add eth0 [10.128.175.126/23]
Sep 09 08:02:13.714 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Pulling image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:14.112 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/dels-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:02:14.224 I ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 reason/AddedInterface Add eth0 [10.128.177.239/23]
Sep 09 08:02:14.379 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:14.434 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/dels-volume-test reason/Created
Sep 09 08:02:14.543 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/dels-volume-test reason/Started
Sep 09 08:02:14.574 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/upds-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:02:14.862 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Created
Sep 09 08:02:14.977 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Started
Sep 09 08:02:14.998 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/upds-volume-test reason/Created
Sep 09 08:02:15.025 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/upds-volume-test reason/Started
Sep 09 08:02:15.038 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/creates-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:02:15.101 I ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:02:15.453 I ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Created
Sep 09 08:02:15.465 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/creates-volume-test reason/Created
Sep 09 08:02:15.526 I ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Started
Sep 09 08:02:15.571 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/creates-volume-test reason/Started
Sep 09 08:02:15.801 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Ready
Sep 09 08:02:15.869 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/upds-volume-test reason/Ready
Sep 09 08:02:15.869 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/dels-volume-test reason/Ready
Sep 09 08:02:15.869 I ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/creates-volume-test reason/Ready
Sep 09 08:02:20.373 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 reason/AddedInterface Add eth0 [10.128.172.224/23]
Sep 09 08:02:21.102 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:21.146 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:21.193 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:21.886 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Created
Sep 09 08:02:21.951 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Started
Sep 09 08:02:22.019 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Created
Sep 09 08:02:22.036 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Created
Sep 09 08:02:22.098 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Started
Sep 09 08:02:22.162 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Started
Sep 09 08:02:22.301 W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.177.239:81/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:02:22.911 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:22.964 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Ready
Sep 09 08:02:22.965 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:23.056 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:02:23.129 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Ready
Sep 09 08:02:23.222 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Ready
Sep 09 08:02:23.388 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Created
Sep 09 08:02:23.445 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Created
Sep 09 08:02:23.505 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Created
Sep 09 08:02:23.554 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Started
Sep 09 08:02:23.597 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Started
Sep 09 08:02:23.656 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Started
Sep 09 08:02:23.877 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Ready
Sep 09 08:02:23.947 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Ready
Sep 09 08:02:23.997 I ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/Ready
Sep 09 08:02:25.941 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(2fa31363bc30ea41f1ffff96f6539e716b9153e72aeec69875efb6d10584d701): netplugin failed: "2020/09/09 08:00:20 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-downward-api-379;K8S_POD_NAME=downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7;K8S_POD_INFRA_CONTAINER_ID=2fa31363bc30ea41f1ffff96f6539e716b9153e72aeec69875efb6d10584d701, CNI_NETNS=/var/run/netns/de8ef08f-5801-49d9-909b-88cb1e5c9f07).\n"
Sep 09 08:02:26.370 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:02:27.786 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init1 reason/Created
Sep 09 08:02:27.800 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:02:27.800 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:02:27.910 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init1 reason/Started
Sep 09 08:02:27.976 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:02:28.857 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init2 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:02:30.493 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init2 reason/Created
Sep 09 08:02:30.621 I ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init2 reason/Started
Sep 09 08:02:30.860 E ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init2 container exited with code 1 (Error): 
Sep 09 08:02:30.860 E ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:02:30.860 E ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 container/init2 init container exited with code 1 (Error): 
Sep 09 08:02:31.313 W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.177.239:81/": dial tcp 10.128.177.239:81: connect: connection refused
Sep 09 08:02:33.409 I ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ reason/Created
Sep 09 08:02:33.490 I ns/e2e-replicaset-4336 replicaset/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d reason/SuccessfulCreate Created pod: my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf
Sep 09 08:02:33.601 I ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:02:42.318 W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.177.239:81/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (2 times)
Sep 09 08:02:42.985 W ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:02:47.860 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(516e94edb4e0ab465c32730f76aa604513a3c3b7107af42176e43977268c62bd): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:02:48.968 - 254s  W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:02:50.104 I ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ reason/Created
Sep 09 08:02:50.283 I ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:02:51.241 I ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ reason/Created
Sep 09 08:02:51.328 W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.177.239:81/": dial tcp 10.128.177.239:81: connect: connection refused (2 times)
Sep 09 08:02:51.342 I ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:02:55.365 W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:02:57.104 W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:03:03.968 - 30s   W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:03:10.797 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(5771b09ffeac1c0c2405ffb1aa484cf4612525a0ab7d4efd064c578896593a64): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:03:32.016 W ns/e2e-init-container-6515 pod/pod-init-9a3d0d5c-1cd9-490d-bc8a-2b91283523c2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:03:32.877 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(6bc921892cdc30b9bf336e00f5f5ac2c405ac1825bb8e98825779324e0234b20): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:03:33.968 - 29s   W ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:03:35.558 W ns/e2e-container-probe-1434 pod/test-webserver-3cb89145-3f21-4126-b967-2c54052dd548 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:03:35.565 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(90307e4d3a50a03a37a09f0b74dc050aca31e595ccbab5a819bbde3dca979578): netplugin failed: "2020/09/09 08:00:25 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-projected-1934;K8S_POD_NAME=downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6;K8S_POD_INFRA_CONTAINER_ID=90307e4d3a50a03a37a09f0b74dc050aca31e595ccbab5a819bbde3dca979578, CNI_NETNS=/var/run/netns/e514f725-73a2-4439-9717-011ac6f6dad4).\n"
Sep 09 08:03:36.166 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss-0_e2e-statefulset-858_629933ed-7fce-494a-9757-6eeebc14b12d_0(9996689bfd0dcc6265e239c456f80f446f0d11270a9cf96e646182ab6cde7e04): netplugin failed: "2020/09/09 08:00:28 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-statefulset-858;K8S_POD_NAME=ss-0;K8S_POD_INFRA_CONTAINER_ID=9996689bfd0dcc6265e239c456f80f446f0d11270a9cf96e646182ab6cde7e04, CNI_NETNS=/var/run/netns/8a1eb5ab-92b7-40e7-b252-0f035e50159c).\n"
Sep 09 08:03:36.476 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ reason/Created
Sep 09 08:03:36.553 I ns/e2e-deployment-9259 replicaset/test-rolling-update-controller reason/SuccessfulCreate Created pod: test-rolling-update-controller-4b89p
Sep 09 08:03:36.573 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:03:40.948 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:03:42.321 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:03:42.321 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/creates-volume-test reason/NotReady
Sep 09 08:03:42.321 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/dels-volume-test reason/NotReady
Sep 09 08:03:42.321 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 container/upds-volume-test reason/NotReady
Sep 09 08:03:44.234 W ns/e2e-projected-7916 pod/pod-projected-secrets-f7234db6-f09b-41a0-a8dd-eca8ed18cc8e node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:03:46.075 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:03:46.290 W clusteroperator/network changed Progressing to False
Sep 09 08:03:56.846 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(4d8651bd5047504e7b65de3b48b80ec81c206553855231f89713737ff8158203): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:03:56.861 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(22bd832859282a8da699bf99f2581ed77f2e4b02742d1cee37386ed1bd4ccf9a): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:03:57.812 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss-0_e2e-statefulset-858_629933ed-7fce-494a-9757-6eeebc14b12d_0(5a90180b5f0874d795e49045035b2be907691ac3184f98ba00cc0ebfcf07b4e3): [e2e-statefulset-858/ss-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:00.584 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500
Sep 09 08:04:02.868 W ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:04:02.894 I ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/Killing
Sep 09 08:04:03.968 W ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:04:03.968 W ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:04:04.148 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p reason/AddedInterface Add eth0 [10.128.178.207/23]
Sep 09 08:04:04.260 I ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ reason/Created
Sep 09 08:04:04.300 I ns/e2e-deployment-7002 replicaset/test-rollover-controller reason/SuccessfulCreate Created pod: test-rollover-controller-jdmhh
Sep 09 08:04:04.349 I ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:04:04.453 W ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:04:04.453 W ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 container/test-webserver reason/NotReady
Sep 09 08:04:04.762 W ns/e2e-container-probe-1700 pod/test-webserver-c572a488-855d-482e-9311-1c58c80d0223 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:04:04.935 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:04:05.248 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Created
Sep 09 08:04:05.314 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Started
Sep 09 08:04:05.713 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Ready
Sep 09 08:04:06.838 W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(ab957495252baace41d284f6c92af55427dae9629fb2ac243dbb9cd8346c8800): netplugin failed: "2020/09/09 08:00:40 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-projected-3633;K8S_POD_NAME=projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4;K8S_POD_INFRA_CONTAINER_ID=ab957495252baace41d284f6c92af55427dae9629fb2ac243dbb9cd8346c8800, CNI_NETNS=/var/run/netns/f4d5efa9-0c39-4f4e-923b-3e0369c53e80).\n"
Sep 09 08:04:07.770 I ns/e2e-deployment-9259 deployment/test-rolling-update-deployment reason/ScalingReplicaSet Scaled up replica set test-rolling-update-deployment-5887db9c6b to 1
Sep 09 08:04:07.811 I ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ reason/Created
Sep 09 08:04:07.846 I ns/e2e-deployment-9259 replicaset/test-rolling-update-deployment-5887db9c6b reason/SuccessfulCreate Created pod: test-rolling-update-deployment-5887db9c6b-q2wf9
Sep 09 08:04:07.989 I ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:04:10.573 I ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 reason/AddedInterface Add eth0 [10.128.140.148/23]
Sep 09 08:04:10.575 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(2b01484bcada304fd27b06e911c95faa34032facd1ec7850ce6f53b0cb41b522): netplugin failed: "2020/09/09 08:01:46 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-deployment-4302;K8S_POD_NAME=webserver-deployment-dd94f59b7-m4bk9;K8S_POD_INFRA_CONTAINER_ID=2b01484bcada304fd27b06e911c95faa34032facd1ec7850ce6f53b0cb41b522, CNI_NETNS=/var/run/netns/e1ea1a56-253f-4fc2-8565-ff7baf7be90e).\n"
Sep 09 08:04:10.583 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Sep 09 08:04:10.799 I ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf reason/AddedInterface Add eth0 [10.128.128.219/23]
Sep 09 08:04:11.256 I ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:04:11.506 I ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 container/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:04:11.536 I ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:04:11.589 I ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:04:11.765 I ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 container/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d reason/Created
Sep 09 08:04:11.838 I ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 container/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d reason/Started
Sep 09 08:04:12.341 I ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 container/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d reason/Ready
Sep 09 08:04:14.162 W ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:04:14.376 I ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d reason/AddedInterface Add eth0 [10.128.168.83/23]
Sep 09 08:04:15.064 I ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 container/main reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:04:15.279 I ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 container/main reason/Created
Sep 09 08:04:15.367 I ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 container/main reason/Started
Sep 09 08:04:15.459 I ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 container/main reason/Ready
Sep 09 08:04:15.550 W ns/e2e-emptydir-9801 pod/pod-3110dbb9-60dd-40b5-993a-907531413494 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:04:18.830 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(d3d506594437ca4ecb5df5dbc00a732221db23f54a897a86c79de0f142a82dd3): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:19.893 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(fd496ce5bd6da4e9e4cccd7604ae5ea4f9b9387c180399e08fe42b87a2c733ad): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:20.591 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Sep 09 08:04:21.721 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ reason/Created
Sep 09 08:04:21.757 I ns/e2e-kubectl-369 replicationcontroller/update-demo-nautilus reason/SuccessfulCreate Created pod: update-demo-nautilus-lrtrv
Sep 09 08:04:21.773 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ reason/Created
Sep 09 08:04:21.808 I ns/e2e-kubectl-369 replicationcontroller/update-demo-nautilus reason/SuccessfulCreate Created pod: update-demo-nautilus-s9hcx
Sep 09 08:04:21.811 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:04:21.849 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:04:22.978 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss-0_e2e-statefulset-858_629933ed-7fce-494a-9757-6eeebc14b12d_0(6f76abba096dd2903583e0d9248dc6c5094fa4638dca16a0ff7989efbe342b0f): [e2e-statefulset-858/ss-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:24.923 I ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 reason/AddedInterface Add eth0 [10.128.179.96/23]
Sep 09 08:04:24.935 I ns/e2e-resourcequota-9988 pod/test-pod node/ reason/Created
Sep 09 08:04:25.072 W ns/e2e-resourcequota-9988 pod/test-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient example.com/dongle.
Sep 09 08:04:25.220 W ns/e2e-resourcequota-9988 pod/test-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient example.com/dongle.
Sep 09 08:04:25.559 I ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:04:25.848 I ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Created
Sep 09 08:04:25.939 I ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Started
Sep 09 08:04:26.365 W ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:04:26.510 I ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Ready
Sep 09 08:04:26.604 I ns/e2e-deployment-9259 deployment/test-rolling-update-deployment reason/ScalingReplicaSet Scaled down replica set test-rolling-update-controller to 0
Sep 09 08:04:26.619 W ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:04:26.646 I ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr container/httpd reason/Killing
Sep 09 08:04:26.666 I ns/e2e-deployment-9259 replicaset/test-rolling-update-controller reason/SuccessfulDelete Deleted pod: test-rolling-update-controller-4b89p
Sep 09 08:04:27.767 W ns/e2e-replicaset-4336 pod/my-hostname-basic-5a96fef3-1a37-45c3-bf77-f1fa6d42908d-p48tf node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:04:27.901 W ns/e2e-resourcequota-9988 pod/test-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient example.com/dongle.
Sep 09 08:04:27.938 W ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:04:27.938 W ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr container/httpd reason/NotReady
Sep 09 08:04:28.298 W ns/e2e-deployment-9259 pod/test-rolling-update-controller-4b89p node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:04:28.397 W ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:04:29.075 W ns/e2e-resourcequota-9988 pod/test-pod node/ reason/GracefulDelete in 0s
Sep 09 08:04:29.086 W ns/e2e-resourcequota-9988 pod/test-pod node/ reason/Deleted
Sep 09 08:04:29.281 W ns/e2e-resourcequota-9988 pod/test-pod reason/FailedScheduling skip schedule deleting pod: e2e-resourcequota-9988/test-pod
Sep 09 08:04:30.702 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (4 times)
Sep 09 08:04:31.566 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(a01f54c3a137b6f0187c6e2c7764b20a3bb33d1e56673b7ef9bffe3cbaf878da): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:32.880 W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(2739a2149adbe96455dddc0796c2fcac9db48c9d40be55e323372262352e90f0): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:34.145 I ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:04:34.403 I ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Created
Sep 09 08:04:34.475 I ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Started
Sep 09 08:04:34.572 I ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Ready
Sep 09 08:04:34.908 I ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ reason/Created
Sep 09 08:04:34.974 I ns/e2e-replication-controller-9309 replicationcontroller/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94 reason/SuccessfulCreate Created pod: my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72
Sep 09 08:04:35.126 I ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:04:38.011 I ns/e2e-deployment-7002 deployment/test-rollover-deployment reason/ScalingReplicaSet Scaled up replica set test-rollover-deployment-78bc8b888c to 1
Sep 09 08:04:38.080 I ns/e2e-deployment-7002 pod/test-rollover-deployment-78bc8b888c-f6wpq node/ reason/Created
Sep 09 08:04:38.137 I ns/e2e-deployment-7002 replicaset/test-rollover-deployment-78bc8b888c reason/SuccessfulCreate Created pod: test-rollover-deployment-78bc8b888c-f6wpq
Sep 09 08:04:38.216 I ns/e2e-deployment-7002 pod/test-rollover-deployment-78bc8b888c-f6wpq node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:04:39.679 W ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:04:40.029 I ns/e2e-deployment-7002 deployment/test-rollover-deployment reason/ScalingReplicaSet Scaled down replica set test-rollover-deployment-78bc8b888c to 0
Sep 09 08:04:40.032 W ns/e2e-deployment-7002 pod/test-rollover-deployment-78bc8b888c-f6wpq node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:04:40.049 W ns/e2e-deployment-7002 pod/test-rollover-deployment-78bc8b888c-f6wpq node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:04:40.067 I ns/e2e-deployment-7002 replicaset/test-rollover-deployment-78bc8b888c reason/SuccessfulDelete Deleted pod: test-rollover-deployment-78bc8b888c-f6wpq
Sep 09 08:04:40.129 I ns/e2e-deployment-7002 deployment/test-rollover-deployment reason/ScalingReplicaSet Scaled up replica set test-rollover-deployment-6f68b9c6f9 to 1
Sep 09 08:04:40.163 I ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ reason/Created
Sep 09 08:04:40.247 I ns/e2e-deployment-7002 replicaset/test-rollover-deployment-6f68b9c6f9 reason/SuccessfulCreate Created pod: test-rollover-deployment-6f68b9c6f9-fmh6z
Sep 09 08:04:40.296 I ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:04:40.603 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (5 times)
Sep 09 08:04:40.628 E ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 container/agnhost container exited with code 2 (Error): 
Sep 09 08:04:41.147 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(ea0276663369634a63daf642f46c387e2eaed45000a958a3347dabf39d211625): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:43.830 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(9c99e60384cde3cadabe92f877c1187d03f74b9e1dec2043ef15deaccb0b5179): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:46.947 W ns/e2e-deployment-9259 pod/test-rolling-update-deployment-5887db9c6b-q2wf9 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:04:48.906 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss-0_e2e-statefulset-858_629933ed-7fce-494a-9757-6eeebc14b12d_0(562e1863640d65bc015a429c1b8893babd9f0609d97c8256a800f105ab4ec9c6): [e2e-statefulset-858/ss-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:50.283 I ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z reason/AddedInterface Add eth0 [10.128.144.153/23]
Sep 09 08:04:50.594 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (6 times)
Sep 09 08:04:50.948 I ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:04:51.249 I ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Created
Sep 09 08:04:51.297 I ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Started
Sep 09 08:04:51.628 I ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Ready
Sep 09 08:04:54.956 W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(01725b7aab02ba0b27775da2703b9e804934cdd8ee2ff6337bf57548cb0af877): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:55.505 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(df8f00b3c3ac46f5ccc0346322d8947c1332130de1afe485ba3baf3530629d42): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:04:59.623 E ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 container/main container exited with code 137 (Error): 
Sep 09 08:05:00.641 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (7 times)
Sep 09 08:05:01.799 W ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:05:01.806 I ns/e2e-deployment-7002 deployment/test-rollover-deployment reason/ScalingReplicaSet Scaled down replica set test-rollover-controller to 0
Sep 09 08:05:01.831 I ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Killing
Sep 09 08:05:01.831 I ns/e2e-deployment-7002 replicaset/test-rollover-controller reason/SuccessfulDelete Deleted pod: test-rollover-controller-jdmhh
Sep 09 08:05:04.062 I ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 node/ reason/Created
Sep 09 08:05:04.181 I ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:05:05.138 W ns/e2e-deployment-7002 pod/test-rollover-controller-jdmhh node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:05.982 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7_e2e-downward-api-379_276c5d3e-10a2-40f0-9481-1837f9001fdf_0(3312d8ef1c4439eca0a07022c484292d221c9f55ffcc14c93a064ed0f74c43fa): [e2e-downward-api-379/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:05:06.902 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(2124f3f4300f1b54e1efefa85751d42b8d2417b24c0583c5894b6b7cd85a052d): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:05:07.232 W ns/e2e-pods-7770 pod/pod-logs-websocket-29afede6-7736-4746-ab12-8ee88ea4c51d node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:05:07.811 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv reason/AddedInterface Add eth0 [10.128.146.146/23]
Sep 09 08:05:08.279 W ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:05:08.397 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Pulling image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:05:09.756 W ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:05:09.756 W ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/NotReady
Sep 09 08:05:10.551 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx reason/AddedInterface Add eth0 [10.128.147.177/23]
Sep 09 08:05:10.598 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (8 times)
Sep 09 08:05:11.657 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Pulling image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:05:11.706 W ns/e2e-deployment-7002 pod/test-rollover-deployment-6f68b9c6f9-fmh6z node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:12.053 I ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 reason/AddedInterface Add eth0 [10.128.122.24/23]
Sep 09 08:05:13.332 I ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ostest-5xqm8-worker-0-cbbx9 container/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94 reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:05:13.363 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Pulled image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:05:13.752 I ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ostest-5xqm8-worker-0-cbbx9 container/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94 reason/Created
Sep 09 08:05:13.772 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Created
Sep 09 08:05:13.787 I ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ostest-5xqm8-worker-0-cbbx9 container/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94 reason/Started
Sep 09 08:05:13.855 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Started
Sep 09 08:05:13.866 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss-0_e2e-statefulset-858_629933ed-7fce-494a-9757-6eeebc14b12d_0(14803556476bb6d4a4cac5abb35a71e1d2d5d339601e20b101bea6130e3b1051): [e2e-statefulset-858/ss-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:05:14.739 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Ready
Sep 09 08:05:14.772 I ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ostest-5xqm8-worker-0-cbbx9 container/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94 reason/Ready
Sep 09 08:05:16.387 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Pulled image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:05:16.744 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Created
Sep 09 08:05:16.829 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Started
Sep 09 08:05:16.970 W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(c281da968153fc520570f7116626f0135ebcf8dab947983fde8e7968b4469334): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:05:17.273 I ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Ready
Sep 09 08:05:18.181 W ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:05:18.220 W ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:05:18.294 I ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Killing
Sep 09 08:05:19.789 W ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:05:19.789 W ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/NotReady
Sep 09 08:05:20.175 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ reason/Created
Sep 09 08:05:20.300 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:05:20.590 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (9 times)
Sep 09 08:05:20.591 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:05:21.319 W ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:05:21.319 W ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/NotReady
Sep 09 08:05:21.473 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(7db01353d67f63fd5332ccce5dda4fa33e503d1a3302ecb4a7d72a0267cd435c): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:05:22.026 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ reason/Created
Sep 09 08:05:22.092 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:05:25.843 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:05:26.793 W ns/e2e-kubectl-369 pod/update-demo-nautilus-lrtrv node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:27.277 W ns/e2e-downward-api-379 pod/downwardapi-volume-4219f5f8-e9af-4afe-86a2-c5030f56eea7 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:05:27.824 W ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:05:28.494 I ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 reason/AddedInterface Add eth0 [10.128.120.76/23]
Sep 09 04:05:28.752 I test="[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:05:29.224 I ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:05:29.366 W ns/e2e-replication-controller-9309 pod/my-hostname-basic-68c35853-d375-4490-9e66-3a81006edd94-f9h72 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:29.908 I ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Created
Sep 09 08:05:30.166 I ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Started
Sep 09 08:05:30.751 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (10 times)
Sep 09 08:05:30.799 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Killing
Sep 09 08:05:31.423 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (2 times)
Sep 09 08:05:31.465 I ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ reason/Created
Sep 09 08:05:31.574 I ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:05:31.616 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: Get "http://10.196.1.181:8090/ready": dial tcp 10.196.1.181:8090: connect: connection refused
Sep 09 08:05:32.404 W ns/e2e-kubectl-369 pod/update-demo-nautilus-s9hcx node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:05:32.430 W ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:05:33.856 W ns/e2e-projected-4386 pod/pod-projected-secrets-f18d461c-b6d7-44ff-b2e0-8e7b5e2d6a42 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:36.604 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f5e70df5263a2a421c4e819534a8ed890a331a1306c851cfe749af494180126
Sep 09 08:05:36.708 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6_e2e-projected-1934_7eb85d6b-cea0-4d02-80ff-3b8ae3cab054_0(5ede76f4afeb53542a7749d79b0608c163d4f4d5d95a55220b9015b48aa9248f): [e2e-projected-1934/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:05:36.750 W ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-59251d12-b410-44a3-abec-49441d654aa4_e2e-dns-7789_e35e0398-0cf1-4425-bed7-686d84358f01_0(b7295210c6051c429b73f0b48b2a0a2ca2dc8dd109a3196943d2b23e0b9cc156): [e2e-dns-7789/dns-test-59251d12-b410-44a3-abec-49441d654aa4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:05:36.780 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss-0_e2e-statefulset-858_629933ed-7fce-494a-9757-6eeebc14b12d_0(9f6b60243fe221afc785c3b647a1008359330cb4686059586f0f6965b482b050): [e2e-statefulset-858/ss-0:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 08:05:36.812 W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4_e2e-projected-3633_90a0d957-1c3d-4123-ad2f-e2e78096cc06_0(819c0db635d2db87c174e76bc727a44c3ed20002c6a35d4b21d3a5134198f8a9): [e2e-projected-3633/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 08:05:37.018 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Created
Sep 09 08:05:37.063 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Started
Sep 09 08:05:37.316 W ns/e2e-projected-1934 pod/downwardapi-volume-4fe29c04-5da7-4f7d-b490-528c7f5124c6 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:05:37.725 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/NotReady
Sep 09 08:05:37.725 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Restarted
Sep 09 04:05:37.980 I test="[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:05:38.088 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:05:41.075 W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:05:41.270 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 reason/AddedInterface Add eth0 [10.128.137.184/23]
Sep 09 08:05:41.974 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:05:42.245 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Created
Sep 09 08:05:42.283 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Started
Sep 09 08:05:42.909 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Ready
Sep 09 08:05:43.398 I ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 node/ reason/Created
Sep 09 08:05:43.434 I ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:05:43.641 I ns/e2e-resourcequota-2945 pod/pfpod node/ reason/Created
Sep 09 08:05:43.705 W ns/e2e-resourcequota-2945 pod/pfpod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:05:43.854 W ns/e2e-resourcequota-2945 pod/pfpod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:05:45.516 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(28369e54354e9bb8aa7f8203f5660a476ec460254ac9c52ece47b0bdf57ff774): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:05:47.212 W ns/openshift-marketplace pod/community-operators-fdsfm node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:05:47.213 W ns/openshift-marketplace pod/certified-operators-kgjmd node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:05:47.255 I ns/openshift-marketplace pod/certified-operators-kgjmd node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:05:47.255 I ns/openshift-marketplace pod/community-operators-fdsfm node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Killing
Sep 09 08:05:47.308 W ns/e2e-projected-3633 pod/projected-volume-5ab0ceb8-5cad-4229-98d9-e0acfd09a2f4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:05:47.339 I ns/openshift-marketplace pod/certified-operators-f47gq node/ reason/Created
Sep 09 08:05:47.339 I ns/openshift-marketplace pod/community-operators-sh97c node/ reason/Created
Sep 09 08:05:47.462 W ns/e2e-resourcequota-2945 pod/pfpod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:05:47.475 W ns/openshift-marketplace pod/redhat-marketplace-8899r node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:05:47.546 I ns/openshift-marketplace pod/redhat-marketplace-8899r node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:05:47.615 I ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:05:47.615 I ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:05:47.640 W ns/openshift-marketplace pod/redhat-operators-sc5nz node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:05:47.760 I ns/openshift-marketplace pod/redhat-operators-sc5nz node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:05:47.825 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ reason/Created
Sep 09 08:05:47.937 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ reason/Created
Sep 09 08:05:48.057 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:05:48.851 W ns/openshift-marketplace pod/community-operators-fdsfm node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:05:48.851 W ns/openshift-marketplace pod/community-operators-fdsfm node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/NotReady
Sep 09 08:05:48.968 W ns/openshift-marketplace pod/community-operators-fdsfm node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:05:49.128 W ns/e2e-resourcequota-2945 pod/pfpod node/ reason/GracefulDelete in 0s
Sep 09 08:05:49.269 W ns/e2e-resourcequota-2945 pod/pfpod node/ reason/Deleted
Sep 09 08:05:49.334 W ns/e2e-resourcequota-2945 pod/pfpod reason/FailedScheduling skip schedule deleting pod: e2e-resourcequota-2945/pfpod
Sep 09 08:05:49.417 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 04:05:49.740 I test="[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:05:49.871 W ns/openshift-marketplace pod/redhat-marketplace-8899r node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe errored: rpc error: code = NotFound desc = could not find container "e42b85a2cbe828c0bbb0c932661a74d42735740f074b52412df9835884702159": container with ID starting with e42b85a2cbe828c0bbb0c932661a74d42735740f074b52412df9835884702159 not found: ID does not exist
Sep 09 08:05:49.977 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 reason/AddedInterface Add eth0 [10.128.157.9/23]
Sep 09 08:05:50.356 W ns/openshift-marketplace pod/redhat-operators-sc5nz node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:05:50.356 W ns/openshift-marketplace pod/redhat-operators-sc5nz node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:05:50.439 W ns/openshift-marketplace pod/certified-operators-kgjmd node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:05:50.439 W ns/openshift-marketplace pod/certified-operators-kgjmd node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:05:50.637 W ns/openshift-marketplace pod/redhat-marketplace-8899r node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:05:50.637 W ns/openshift-marketplace pod/redhat-marketplace-8899r node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:05:50.759 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:05:50.794 W ns/openshift-marketplace pod/community-operators-fdsfm node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:05:50.932 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:05:50.972 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:05:51.052 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:05:51.306 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Created
Sep 09 08:05:51.369 W ns/openshift-marketplace pod/certified-operators-kgjmd node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:51.392 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Started
Sep 09 08:05:51.438 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 08:05:51.643 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Created
Sep 09 08:05:51.690 I ns/e2e-resourcequota-2945 pod/burstable-pod node/ reason/Created
Sep 09 08:05:51.716 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Started
Sep 09 08:05:51.817 W ns/e2e-resourcequota-2945 pod/burstable-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:05:51.914 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Ready
Sep 09 08:05:52.084 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Ready
Sep 09 08:05:52.084 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Ready
Sep 09 08:05:52.084 I ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:05:52.089 W ns/e2e-resourcequota-2945 pod/burstable-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:05:52.104 I ns/e2e-statefulset-858 pod/ss-0 reason/AddedInterface Add eth0 [10.128.131.78/23]
Sep 09 08:05:52.122 W ns/openshift-marketplace pod/redhat-marketplace-8899r node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:52.337 W ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:05:52.355 W clusteroperator/network changed Progressing to False
Sep 09 08:05:52.501 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (14 times)
Sep 09 08:05:52.574 W ns/openshift-marketplace pod/redhat-operators-sc5nz node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:05:52.918 I ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:05:53.249 I ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:05:53.338 I ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:05:53.921 I ns/e2e-webhook-2407 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:05:54.063 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ reason/Created
Sep 09 08:05:54.133 I ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:05:54.218 I ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ reason/Created
Sep 09 08:05:54.230 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:05:54.258 W ns/e2e-resourcequota-2945 pod/burstable-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:05:54.334 I ns/e2e-webhook-2407 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-qzpxl
Sep 09 08:05:54.356 I ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:05:54.671 I ns/openshift-marketplace pod/certified-operators-f47gq reason/AddedInterface Add eth0 [10.128.3.52/23]
Sep 09 08:05:54.828 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg reason/AddedInterface Add eth0 [10.128.2.211/23]
Sep 09 08:05:54.999 I ns/openshift-marketplace pod/community-operators-sh97c reason/AddedInterface Add eth0 [10.128.2.26/23]
Sep 09 08:05:55.550 I ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:05:55.829 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:05:56.100 I ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:05:56.149 W ns/e2e-resourcequota-2945 pod/burstable-pod node/ reason/GracefulDelete in 0s
Sep 09 08:05:56.264 W ns/e2e-resourcequota-2945 pod/burstable-pod node/ reason/Deleted
Sep 09 08:05:56.298 W ns/e2e-resourcequota-2945 pod/burstable-pod reason/FailedScheduling skip schedule deleting pod: e2e-resourcequota-2945/burstable-pod
Sep 09 08:05:56.560 I ns/e2e-crd-webhook-3873 deployment/sample-crd-conversion-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-crd-conversion-webhook-deployment-84c84cf5f9 to 1
Sep 09 08:05:56.710 I ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ reason/Created
Sep 09 08:05:56.764 I ns/e2e-crd-webhook-3873 replicaset/sample-crd-conversion-webhook-deployment-84c84cf5f9 reason/SuccessfulCreate Created pod: sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf
Sep 09 08:05:56.817 E ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/querier container exited with code 137 (Error): 
Sep 09 08:05:56.817 E ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier container exited with code 137 (Error): 
Sep 09 08:05:56.817 E ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 08:05:56.845 I ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:05:57.762 W ns/e2e-dns-7789 pod/dns-test-59251d12-b410-44a3-abec-49441d654aa4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:05:58.464 I ns/openshift-marketplace pod/redhat-operators-6w75w reason/AddedInterface Add eth0 [10.128.2.149/23]
Sep 09 08:05:58.548 I ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:05:58.840 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:05:58.871 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:05:59.048 I ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:05:59.070 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:05:59.121 I ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:05:59.463 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:05:59.499 W ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: cat: can't open '/tmp/health': No such file or directory\n
Sep 09 08:05:59.520 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Killing
Sep 09 08:05:59.566 I ns/e2e-replicaset-6896 pod/pod-adoption-release node/ reason/Created
Sep 09 08:05:59.572 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:05:59.621 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:05:59.692 I ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:05:59.853 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:05:59.958 I ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj reason/AddedInterface Add eth0 [10.128.154.106/23]
Sep 09 08:06:00.711 I ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ostest-5xqm8-worker-0-cbbx9 container/test-container-subpath-downwardapi-clmj reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:06:00.853 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:06:01.100 I ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ostest-5xqm8-worker-0-cbbx9 container/test-container-subpath-downwardapi-clmj reason/Created
Sep 09 08:06:01.190 I ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ostest-5xqm8-worker-0-cbbx9 container/test-container-subpath-downwardapi-clmj reason/Started
Sep 09 08:06:01.441 I ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ostest-5xqm8-worker-0-cbbx9 container/test-container-subpath-downwardapi-clmj reason/Ready
Sep 09 08:06:01.845 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:06:02.357 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (15 times)
Sep 09 08:06:02.414 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:06:02.763 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:06:02.851 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:06:02.851 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:06:03.925 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:06:04.860 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:06:04.899 I ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Pulled image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:06:05.208 I ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Created
Sep 09 08:06:05.276 I ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Started
Sep 09 08:06:05.892 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:06:06.861 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:06:07.854 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:06:08.844 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (11 times)
Sep 09 08:06:09.786 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(ab4dfd7ec4a8f8971ed1e708ca74aa43fdabf9bfae8a5c32343758843dff7fd1): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:06:09.852 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (12 times)
Sep 09 08:06:10.855 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (13 times)
Sep 09 08:06:10.987 I ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:06:11.941 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (14 times)
Sep 09 08:06:12.411 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (16 times)
Sep 09 08:06:12.781 I ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 reason/AddedInterface Add eth0 [10.128.163.237/23]
Sep 09 08:06:12.848 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (15 times)
Sep 09 08:06:13.010 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:06:13.149 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:06:13.388 I ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:06:13.703 I ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 08:06:13.846 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (16 times)
Sep 09 08:06:14.311 I ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 08:06:15.863 W ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:06:16.332 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (104 times)
Sep 09 08:06:18.972 W ns/e2e-downward-api-5033 pod/downwardapi-volume-150f1226-f8ec-4042-9a95-8e39f58c3af5 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:06:20.067 I ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:06:20.268 I ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Ready
Sep 09 08:06:20.703 I ns/e2e-statefulset-858 pod/ss-1 node/ reason/Created
Sep 09 08:06:20.835 I ns/e2e-statefulset-858 statefulset/ss reason/SuccessfulCreate create Pod ss-1 in StatefulSet ss successful
Sep 09 08:06:20.920 I ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:06:22.097 W ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:06:22.285 I ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ reason/Created
Sep 09 08:06:22.374 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (17 times)
Sep 09 08:06:22.390 I ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:06:24.035 W ns/e2e-subpath-4862 pod/pod-subpath-test-downwardapi-clmj node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:06:24.536 I ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl reason/AddedInterface Add eth0 [10.128.172.3/23]
Sep 09 08:06:25.296 I ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:06:25.328 I ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 node/ reason/Created
Sep 09 08:06:25.370 I ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:06:25.571 I ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 08:06:25.630 I ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 08:06:27.516 I ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 08:06:27.864 I ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf reason/AddedInterface Add eth0 [10.128.177.225/23]
Sep 09 08:06:28.572 I ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr container/sample-crd-conversion-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:06:28.889 I ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr container/sample-crd-conversion-webhook reason/Created
Sep 09 08:06:28.945 I ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr container/sample-crd-conversion-webhook reason/Started
Sep 09 08:06:29.765 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:06:30.026 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Created
Sep 09 08:06:30.085 I ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Started
Sep 09 08:06:30.577 W ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 container/busybox reason/Restarted
Sep 09 08:06:30.638 I ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr container/sample-crd-conversion-webhook reason/Ready
Sep 09 08:06:30.779 W ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:06:32.387 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (18 times)
Sep 09 08:06:33.157 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ reason/Created
Sep 09 08:06:33.222 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:06:35.378 W ns/e2e-container-probe-8702 pod/busybox-0259525d-1527-4a9a-862f-e5a38a82a513 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:06:35.643 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_webserver-deployment-dd94f59b7-m4bk9_e2e-deployment-4302_4c3933fd-44cd-4dae-ab7e-778375226540_0(e0882c06159cf5e437f3ab5be30997fa9cad5851de30ad2ba69afa5aa22873d1): [e2e-deployment-4302/webserver-deployment-dd94f59b7-m4bk9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:06:37.141 W ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:06:38.307 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ reason/Created
Sep 09 08:06:38.378 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:06:39.061 W ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:06:39.061 W ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr container/sample-crd-conversion-webhook reason/NotReady
Sep 09 08:06:42.106 W ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:06:42.377 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (19 times)
Sep 09 08:06:43.312 I ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 node/ reason/Created
Sep 09 08:06:43.403 I ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:06:43.745 W ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:06:43.745 W ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/NotReady
Sep 09 04:06:48.357 I test="[sig-apps] Deployment deployment should support proportional scaling [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 04:06:51.283 - 194s  I test="[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:06:54.516 I ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ reason/Created
Sep 09 08:06:54.677 I ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:06:54.693 I ns/e2e-services-8942 replicationcontroller/externalname-service reason/SuccessfulCreate Created pod: externalname-service-x6xw2
Sep 09 08:06:54.879 I ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ reason/Created
Sep 09 08:06:54.997 I ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:06:55.018 I ns/e2e-services-8942 replicationcontroller/externalname-service reason/SuccessfulCreate Created pod: externalname-service-jzvt7
Sep 09 08:06:55.843 I ns/e2e-statefulset-858 pod/ss-1 reason/AddedInterface Add eth0 [10.128.130.106/23]
Sep 09 08:06:56.498 I ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:06:56.786 I ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:06:56.866 I ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:06:57.876 W ns/e2e-webhook-2407 pod/sample-webhook-deployment-7bc8486f8c-qzpxl node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:06:57.999 I ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:06:58.127 I ns/e2e-statefulset-858 pod/ss-2 node/ reason/Created
Sep 09 08:06:58.161 I ns/e2e-statefulset-858 statefulset/ss reason/SuccessfulCreate create Pod ss-2 in StatefulSet ss successful
Sep 09 08:06:58.224 I ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:06:58.422 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod reason/AddedInterface Add eth0 [10.128.135.85/23]
Sep 09 08:06:59.250 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-1 reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:06:59.552 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-1 reason/Created
Sep 09 08:06:59.597 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-1 reason/Started
Sep 09 08:06:59.638 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-2 reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:00.082 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-2 reason/Created
Sep 09 08:07:00.267 W ns/e2e-crd-webhook-3873 pod/sample-crd-conversion-webhook-deployment-84c84cf5f9-797lf node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:07:00.359 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-2 reason/Started
Sep 09 08:07:00.719 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-3 reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:00.971 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-3 reason/Created
Sep 09 08:07:01.075 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-3 reason/Started
Sep 09 08:07:01.191 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-3 reason/Ready
Sep 09 08:07:01.191 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-2 reason/Ready
Sep 09 08:07:01.191 I ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-1 reason/Ready
Sep 09 08:07:02.208 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ reason/Created
Sep 09 08:07:02.374 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:07:03.359 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-1 reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:03.651 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-1 reason/Created
Sep 09 08:07:03.704 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-1 reason/Started
Sep 09 08:07:03.724 I ns/e2e-replicaset-6896 pod/pod-adoption-release reason/AddedInterface Add eth0 [10.128.180.183/23]
Sep 09 08:07:03.727 I ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 reason/AddedInterface Add eth0 [10.128.139.159/23]
Sep 09 08:07:03.742 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-2 reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:03.968 W ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:07:04.012 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-2 reason/Created
Sep 09 08:07:04.034 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-2 reason/Started
Sep 09 08:07:04.390 I ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption-release reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:07:04.477 I ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:04.690 I ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption-release reason/Created
Sep 09 08:07:04.728 I ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption-release reason/Started
Sep 09 08:07:04.760 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-2 reason/Ready
Sep 09 08:07:04.760 I ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-1 reason/Ready
Sep 09 08:07:04.782 I ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Created
Sep 09 08:07:04.899 I ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Started
Sep 09 08:07:05.127 I ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption-release reason/Ready
Sep 09 08:07:05.951 W ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:07:06.224 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:07:06.239 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:07:06.240 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:07:06.241 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:07:06.242 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:07:06.245 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:07:06.252 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:07:06.266 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:07:06.273 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:07:06.273 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:07:06.980 I ns/e2e-replicaset-6896 pod/pod-adoption-release-t6wjc node/ reason/Created
Sep 09 08:07:07.789 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:07.789 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/NotReady
Sep 09 08:07:07.838 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:07.838 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/NotReady
Sep 09 08:07:07.906 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:07.906 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/NotReady
Sep 09 08:07:08.369 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:08.369 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/NotReady
Sep 09 08:07:08.380 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:08.380 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr container/httpd reason/NotReady
Sep 09 08:07:08.479 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:08.479 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr container/httpd reason/NotReady
Sep 09 08:07:08.536 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:08.536 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 container/httpd reason/NotReady
Sep 09 08:07:08.865 I ns/e2e-kubectl-5352 pod/e2e-test-httpd-pod node/ reason/Created
Sep 09 08:07:09.023 I ns/e2e-kubectl-5352 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:07:09.185 W ns/e2e-kubectl-5352 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:07:09.348 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:09.348 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr container/httpd reason/NotReady
Sep 09 08:07:10.775 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-pfc5z node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:07:10.788 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-prjxj node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:07:10.806 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-hvwv8 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:07:10.814 W ns/e2e-projected-3613 pod/downwardapi-volume-2ff0cadc-1b30-42d5-916b-4223ede697d9 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:07:10.936 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p26k2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:07:11.054 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-48q9w node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:07:11.088 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-sqscr node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:07:11.664 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-t6zlg node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:07:11.714 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-m4bk9 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:07:11.771 I ns/e2e-services-296 pod/pod1 node/ reason/Created
Sep 09 08:07:11.910 I ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:07:12.173 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-8pm7c node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:07:12.177 W ns/e2e-deployment-4302 pod/webserver-deployment-dd94f59b7-p88mh node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:07:14.000 W ns/openshift-kuryr pod/kuryr-cni-7sd9x node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.196:8090/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:07:14.159 I ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 node/ reason/Created
Sep 09 08:07:14.259 I ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:07:17.548 I ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 reason/AddedInterface Add eth0 [10.128.124.126/23]
Sep 09 08:07:17.870 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad reason/AddedInterface Add eth0 [10.128.129.150/23]
Sep 09 08:07:18.226 I ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:07:18.432 W ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:07:18.468 W ns/e2e-replicaset-6896 pod/pod-adoption-release-t6wjc node/ reason/GracefulDelete in 0s
Sep 09 08:07:18.568 W ns/e2e-replicaset-6896 pod/pod-adoption-release-t6wjc node/ reason/Deleted
Sep 09 08:07:18.577 I ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 reason/Created
Sep 09 08:07:18.600 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-data-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:18.639 I ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 reason/Started
Sep 09 08:07:18.862 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-data-test reason/Created
Sep 09 08:07:18.930 I ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 reason/Ready
Sep 09 08:07:18.948 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-data-test reason/Started
Sep 09 08:07:18.995 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-binary-test reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:07:19.291 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-binary-test reason/Created
Sep 09 08:07:19.462 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-binary-test reason/Started
Sep 09 08:07:19.495 I ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-data-test reason/Ready
Sep 09 08:07:20.344 W ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:20.344 W ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption-release reason/NotReady
Sep 09 08:07:22.128 I ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ reason/Created
Sep 09 08:07:22.252 I ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:07:22.255 W ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:07:22.283 W ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:07:22.534 W ns/e2e-kubectl-5352 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:07:26.147 I ns/e2e-webhook-5921 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:07:26.570 I ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ reason/Created
Sep 09 08:07:26.938 I ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:07:26.982 I ns/e2e-webhook-5921 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-rszmk
Sep 09 08:07:27.095 I ns/e2e-services-8942 pod/externalname-service-jzvt7 reason/AddedInterface Add eth0 [10.128.148.79/23]
Sep 09 08:07:27.418 W ns/e2e-replicaset-6896 pod/pod-adoption-release node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:07:27.511 I ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 reason/AddedInterface Add eth0 [10.128.141.193/23]
Sep 09 08:07:27.685 I ns/e2e-statefulset-858 pod/ss-2 reason/AddedInterface Add eth0 [10.128.131.150/23]
Sep 09 08:07:27.972 I ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 container/externalname-service reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:28.231 I ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 container/externalname-service reason/Created
Sep 09 08:07:28.316 I ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 container/externalname-service reason/Started
Sep 09 08:07:28.333 I ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:07:28.413 W ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-5d967547-3319-4814-aa9b-0a8bebce3071_e2e-dns-1855_9016ac5b-e556-49e5-b4b9-8c29f2e183ab_0(cbf62240bd9132bffd82b8ba3a42cd58845216193a0de8b8605f5d6f2ec65cd9): [e2e-dns-1855/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:07:28.417 I ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:07:28.445 I ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 container/externalname-service reason/Ready
Sep 09 08:07:28.451 W ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod1_e2e-services-296_2abf9d32-18bf-4f53-a0f1-e189488ad948_0(60e124b504c29415bcc9e4528c2a99cd1b6d6fe413ed41584ea953b5468b1b8f): [e2e-services-296/pod1:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:07:28.504 W ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12_e2e-downward-api-1724_b98cb0f0-d700-4843-ae17-157fc2637386_0(dc3982ee13104683a7c552f7536c34e1411dca29fddaa1ae59a29eaabd6b0806): [e2e-downward-api-1724/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:53762->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:07:28.661 I ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Created
Sep 09 08:07:28.733 I ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 node/ostest-5xqm8-worker-0-rzx47 container/termination-message-container reason/Started
Sep 09 08:07:28.814 I ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:07:28.891 I ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:07:29.010 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 08:07:29.010 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 08:07:29.339 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:07:30.392 W ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:07:30.514 I ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:07:33.376 I ns/e2e-webhook-6116 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:07:33.442 I ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ reason/Created
Sep 09 08:07:33.512 I ns/e2e-webhook-6116 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-68lqc
Sep 09 08:07:33.535 I ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:07:34.000 W ns/e2e-container-runtime-6950 pod/termination-message-container20de9b98-16ba-4681-874f-d6e5dc2f7b08 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:07:34.005 W ns/openshift-kuryr pod/kuryr-cni-7sd9x node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.196:8090/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (2 times)
Sep 09 08:07:37.859 W ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:07:39.013 W ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:07:40.728 W ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:40.728 W ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-data-test reason/NotReady
Sep 09 08:07:40.901 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:07:40.929 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:07:41.007 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:07:41.458 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:07:41.493 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:07:41.672 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 reason/AddedInterface Add eth0 [10.128.118.76/23]
Sep 09 08:07:41.933 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:07:42.436 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:42.445 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:07:42.703 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:07:42.910 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:07:43.013 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:43.076 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:07:43.196 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Created
Sep 09 08:07:43.246 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Started
Sep 09 08:07:43.271 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Pulling image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 08:07:43.462 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:07:43.941 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:07:44.458 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:07:44.930 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:07:45.509 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:07:45.954 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:07:46.052 I ns/e2e-services-8942 pod/externalname-service-x6xw2 reason/AddedInterface Add eth0 [10.128.148.3/23]
Sep 09 08:07:46.453 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:07:46.675 I ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr container/externalname-service reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:46.928 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:07:47.015 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 08:07:47.060 I ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr container/externalname-service reason/Created
Sep 09 08:07:47.277 W clusteroperator/network changed Progressing to False
Sep 09 08:07:47.318 I ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr container/externalname-service reason/Started
Sep 09 08:07:47.503 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:07:47.519 I ns/e2e-services-296 pod/pod1 reason/AddedInterface Add eth0 [10.128.169.149/23]
Sep 09 08:07:47.766 I ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr container/externalname-service reason/Ready
Sep 09 08:07:47.932 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:07:48.113 I ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:48.405 I ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Created
Sep 09 08:07:48.437 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:07:48.459 I ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Started
Sep 09 08:07:48.499 I ns/e2e-services-8942 pod/execpodghx5k node/ reason/Created
Sep 09 08:07:48.726 I ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:07:48.927 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:07:48.971 W ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:07:48.971 W ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr pod has been pending longer than a minute
Sep 09 08:07:49.039 I ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Ready
Sep 09 08:07:49.457 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:07:49.863 I ns/e2e-services-296 pod/pod2 node/ reason/Created
Sep 09 08:07:49.930 I ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:07:49.931 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:07:50.446 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:07:50.940 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (11 times)
Sep 09 08:07:51.442 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (11 times)
Sep 09 08:07:51.928 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (12 times)
Sep 09 08:07:52.484 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (12 times)
Sep 09 08:07:52.503 W ns/e2e-configmap-3403 pod/pod-configmaps-795cba48-2262-4dd2-97e1-c4984a9ea1ad node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:07:53.009 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (13 times)
Sep 09 08:07:53.453 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (13 times)
Sep 09 08:07:53.565 E ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-3 container exited with code 137 (Error): 
Sep 09 08:07:53.565 E ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-2 container exited with code 137 (Error): 
Sep 09 08:07:53.565 E ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/busybox-1 container exited with code 137 (Error): 
Sep 09 08:07:53.895 I ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk reason/AddedInterface Add eth0 [10.128.160.153/23]
Sep 09 08:07:53.943 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (14 times)
Sep 09 08:07:54.434 I ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 reason/AddedInterface Add eth0 [10.128.122.230/23]
Sep 09 08:07:54.488 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (14 times)
Sep 09 08:07:54.638 I ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:54.741 I ns/e2e-services-8942 pod/execpodghx5k reason/AddedInterface Add eth0 [10.128.149.246/23]
Sep 09 08:07:54.965 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (15 times)
Sep 09 08:07:55.073 I ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Created
Sep 09 08:07:55.170 I ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Started
Sep 09 08:07:55.301 I ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:55.443 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (15 times)
Sep 09 08:07:55.568 I ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:07:55.603 I ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:07:55.901 I ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Ready
Sep 09 08:07:55.924 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (16 times)
Sep 09 08:07:55.924 I ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Created
Sep 09 08:07:55.992 I ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Started
Sep 09 08:07:56.191 I ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:07:56.547 I ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Ready
Sep 09 08:07:56.644 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (16 times)
Sep 09 08:07:56.936 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (17 times)
Sep 09 08:07:57.234 W ns/e2e-kubelet-etc-hosts-1137 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:07:57.366 W ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:07:57.442 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (17 times)
Sep 09 08:07:57.763 I ns/e2e-webhook-5921 pod/webhook-to-be-mutated node/ reason/Created
Sep 09 08:07:57.826 I ns/e2e-webhook-5921 pod/webhook-to-be-mutated node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:07:57.973 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (18 times)
Sep 09 08:07:58.179 W ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:07:58.450 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (18 times)
Sep 09 08:07:58.949 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (19 times)
Sep 09 08:07:59.178 W ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:59.178 W ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-1 reason/NotReady
Sep 09 08:07:59.178 W ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 container/busybox-2 reason/NotReady
Sep 09 08:07:59.343 I ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 node/ reason/Created
Sep 09 08:07:59.417 I ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:07:59.472 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (19 times)
Sep 09 08:07:59.596 W ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:07:59.596 W ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/NotReady
Sep 09 08:07:59.668 I ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc reason/AddedInterface Add eth0 [10.128.184.193/23]
Sep 09 08:07:59.882 W ns/e2e-emptydir-2999 pod/pod-99802690-828c-434c-b9c2-8aa9be493a97 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:07:59.938 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (20 times)
Sep 09 08:08:00.450 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (20 times)
Sep 09 08:08:00.457 W ns/e2e-webhook-5921 pod/sample-webhook-deployment-7bc8486f8c-rszmk node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:08:00.461 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 08:08:00.577 I ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:08:00.812 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Created
Sep 09 08:08:00.884 I ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Created
Sep 09 08:08:00.918 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Started
Sep 09 08:08:00.931 I ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Started
Sep 09 08:08:00.946 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (21 times)
Sep 09 08:08:01.171 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Ready
Sep 09 08:08:01.171 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:08:01.171 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Ready
Sep 09 08:08:01.506 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (21 times)
Sep 09 08:08:01.919 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (22 times)
Sep 09 08:08:02.462 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (22 times)
Sep 09 08:08:02.520 I ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Ready
Sep 09 08:08:02.588 I ns/e2e-replication-controller-6889 pod/pod-adoption node/ reason/Created
Sep 09 08:08:02.626 I ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:08:02.894 I ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:08:02.971 I ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:08:03.032 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:08:03.052 I ns/e2e-statefulset-858 statefulset/ss reason/SuccessfulDelete delete Pod ss-2 in StatefulSet ss successful
Sep 09 08:08:03.968 W ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:08:04.886 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:08:05.387 W ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:08:06.675 E ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:08:06.907 W ns/e2e-kubelet-etc-hosts-1137 pod/test-host-network-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:07.046 I ns/e2e-replication-controller-1703 pod/pod-release-88cdg node/ reason/Created
Sep 09 08:08:07.133 I ns/e2e-replication-controller-1703 replicationcontroller/pod-release reason/SuccessfulCreate Created pod: pod-release-88cdg
Sep 09 08:08:07.195 I ns/e2e-replication-controller-1703 pod/pod-release-88cdg node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:08:08.239 W ns/e2e-webhook-5921 pod/webhook-to-be-mutated node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:08:08.241 W ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:08.336 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 08:08:08.354 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Killing
Sep 09 08:08:08.368 I ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Killing
Sep 09 08:08:09.237 W ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:08:09.237 W ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 reason/NotReady
Sep 09 08:08:10.929 I ns/e2e-events-6249 / reason/Test This is event-test
Sep 09 08:08:11.055 I ns/e2e-events-6249 / reason/Test This is a test event - patched
Sep 09 08:08:11.130 W ns/e2e-webhook-6116 pod/sample-webhook-deployment-7bc8486f8c-68lqc node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:08:11.943 W ns/e2e-dns-1855 pod/dns-test-5d967547-3319-4814-aa9b-0a8bebce3071 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:12.153 I ns/e2e-replication-controller-1703 pod/pod-release-fv52z node/ reason/Created
Sep 09 08:08:12.187 I ns/e2e-replication-controller-1703 replicationcontroller/pod-release reason/SuccessfulCreate Created pod: pod-release-fv52z
Sep 09 08:08:12.230 I ns/e2e-replication-controller-1703 pod/pod-release-fv52z node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:08:12.284 I ns/e2e-services-296 pod/pod2 reason/AddedInterface Add eth0 [10.128.168.104/23]
Sep 09 08:08:12.374 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ reason/Created
Sep 09 08:08:12.431 W ns/e2e-statefulset-858 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:08:12.439 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:08:12.480 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:08:12.536 I ns/e2e-statefulset-858 statefulset/ss reason/SuccessfulDelete delete Pod ss-1 in StatefulSet ss successful
Sep 09 08:08:12.961 I ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr container/pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:08:12.978 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:08:13.298 I ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr container/pause reason/Created
Sep 09 08:08:13.336 I ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr container/pause reason/Started
Sep 09 08:08:13.774 I ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 reason/AddedInterface Add eth0 [10.128.147.222/23]
Sep 09 08:08:13.978 I ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr container/pause reason/Ready
Sep 09 08:08:14.256 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:08:14.343 I ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ reason/Created
Sep 09 08:08:14.390 I ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:08:14.460 I ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:08:14.737 I ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 08:08:14.827 I ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 08:08:14.943 W ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:08:14.987 I ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Killing
Sep 09 08:08:15.078 W ns/e2e-services-296 service/endpoint-test2 reason/FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service e2e-services-296/endpoint-test2: Error updating endpoint-test2-zplzg EndpointSlice for Service e2e-services-296/endpoint-test2: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "endpoint-test2-zplzg": the object has been modified; please apply your changes to the latest version and try again
Sep 09 08:08:15.389 I ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Ready
Sep 09 08:08:16.032 W ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:08:16.190 I ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr container/pause reason/Killing
Sep 09 08:08:16.580 W ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:08:16.580 W ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/NotReady
Sep 09 08:08:17.114 W ns/e2e-kubelet-test-4060 pod/busybox-host-aliases316e2dee-f24f-4018-b5ab-4a8fd1e07286 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:17.383 W ns/e2e-webhook-5921 pod/webhook-to-be-mutated node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:08:18.081 W ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:08:18.081 W ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr container/pause reason/NotReady
Sep 09 08:08:18.612 W ns/e2e-statefulset-858 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:18.660 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:08:18.742 I ns/e2e-statefulset-858 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful
Sep 09 08:08:18.743 I ns/e2e-gc-7163 pod/simpletest.rc-8qbhk node/ reason/Created
Sep 09 08:08:18.779 I ns/e2e-gc-7163 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-8qbhk
Sep 09 08:08:18.823 I ns/e2e-gc-7163 pod/simpletest.rc-8qbhk node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:08:18.823 I ns/e2e-gc-7163 pod/simpletest.rc-kn8sx node/ reason/Created
Sep 09 08:08:18.849 I ns/e2e-gc-7163 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-kn8sx
Sep 09 08:08:18.899 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:08:18.968 W ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:08:18.973 I ns/e2e-gc-7163 pod/simpletest.rc-kn8sx node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:08:19.672 W ns/e2e-services-296 pod/pod2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:08:20.230 I ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 node/ reason/Created
Sep 09 08:08:20.309 I ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:08:23.224 W ns/e2e-replication-controller-1703 pod/pod-release-fv52z node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:23.779 W ns/e2e-gc-7163 pod/simpletest.rc-8qbhk node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:23.779 W ns/e2e-gc-7163 pod/simpletest.rc-kn8sx node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:08:23.994 W ns/e2e-replication-controller-1703 pod/pod-release-fv52z node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:25.214 W ns/e2e-replication-controller-1703 pod/pod-release-88cdg node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:26.832 W ns/e2e-services-296 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:27.576 W ns/e2e-statefulset-858 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:08:29.915 W ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:08:31.424 W ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:08:31.424 W ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/NotReady
Sep 09 08:08:32.512 I ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 reason/AddedInterface Add eth0 [10.128.120.111/23]
Sep 09 08:08:33.211 I ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:08:33.506 I ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:08:33.541 I ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:08:33.970 W ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:08:34.184 W ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:08:37.031 I ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:37.670 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:37.727 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:37.727 W ns/e2e-downward-api-1724 pod/annotationupdateaac32481-0c05-4575-b31c-4f9f85906a12 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:37.728 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful
Sep 09 08:08:38.005 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:38.092 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0
Sep 09 08:08:38.132 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:38.173 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:38.228 W ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "default-token-s6jvs" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:08:38.286 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful
Sep 09 08:08:38.329 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:38.378 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:38.383 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful (2 times)
Sep 09 08:08:38.411 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:38.470 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0 (2 times)
Sep 09 08:08:38.538 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:38.561 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:38.647 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful (2 times)
Sep 09 08:08:38.743 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:38.782 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful (3 times)
Sep 09 08:08:38.812 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:38.853 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:38.892 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0 (3 times)
Sep 09 08:08:38.899 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:38.928 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:38.984 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful (3 times)
Sep 09 08:08:39.056 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:39.088 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:39.135 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:39.138 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful (4 times)
Sep 09 08:08:39.175 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0 (4 times)
Sep 09 08:08:39.204 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:39.234 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:39.308 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful (4 times)
Sep 09 08:08:39.372 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:39.432 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:39.473 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful (5 times)
Sep 09 08:08:39.493 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:39.580 W ns/e2e-statefulset-1701 statefulset/ss reason/FailedCreate create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Sep 09 08:08:39.619 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0 (5 times)
Sep 09 08:08:39.636 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:39.657 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:39.715 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful (5 times)
Sep 09 08:08:39.790 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:39.827 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:39.847 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful (6 times)
Sep 09 08:08:39.859 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:39.918 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0 (6 times)
Sep 09 08:08:39.941 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:39.997 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:40.034 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful (6 times)
Sep 09 08:08:40.134 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:40.160 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:40.193 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful (7 times)
Sep 09 08:08:40.235 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:40.344 W ns/e2e-statefulset-1701 statefulset/ss reason/FailedCreate create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again. (2 times)
Sep 09 08:08:40.391 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:40.422 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0 (7 times)
Sep 09 08:08:40.463 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:40.511 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful (7 times)
Sep 09 08:08:40.578 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:40.629 I ns/e2e-statefulset-1701 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful (8 times)
Sep 09 08:08:40.630 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:40.671 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:40.750 W ns/e2e-statefulset-1701 statefulset/ss reason/RecreatingFailedPod StatefulSet e2e-statefulset-1701/ss is recreating failed Pod ss-0 (8 times)
Sep 09 08:08:40.762 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:40.809 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:41.002 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:41.060 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:41.099 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:41.159 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:41.222 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:41.350 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:41.384 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:41.422 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:41.429 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (3 times)
Sep 09 08:08:41.474 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:41.567 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:41.694 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:41.746 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:41.752 I ns/e2e-replication-controller-6889 pod/pod-adoption reason/AddedInterface Add eth0 [10.128.143.3/23]
Sep 09 08:08:41.795 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:41.871 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:41.943 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:42.060 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:42.097 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:42.125 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:42.152 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:42.183 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:42.264 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:42.354 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:42.360 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:42.501 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:42.544 I ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:08:42.711 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:42.902 I ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption reason/Created
Sep 09 08:08:43.045 I ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption reason/Started
Sep 09 08:08:43.394 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:43.599 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:43.800 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:43.956 I ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption reason/Ready
Sep 09 08:08:43.960 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:44.038 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:44.212 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:44.253 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:44.272 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:44.355 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:44.416 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:44.595 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:44.639 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:44.666 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:44.753 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:44.852 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:45.079 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:45.144 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:45.175 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:45.253 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:45.284 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:45.401 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:45.456 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:45.468 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:45.558 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:45.584 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:45.806 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:45.832 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:45.882 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:45.918 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:45.953 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:46.042 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:46.109 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:46.154 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:46.252 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:46.297 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:46.457 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:46.553 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:46.605 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:46.688 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:46.729 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:46.825 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:46.865 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:47.012 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.124 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:47.170 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:47.185 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ reason/Created
Sep 09 08:08:47.235 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:47.261 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:08:47.265 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:47.302 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.332 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:47.370 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:47.454 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:47.496 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:47.503 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.555 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:47.617 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:47.736 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:47.797 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:47.831 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:47.897 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:47.944 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:48.020 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:48.056 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:48.056 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:48.104 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:48.127 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:48.199 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:48.270 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:48.271 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:48.354 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:48.392 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:48.519 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:48.598 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:48.665 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:48.829 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:48.878 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:49.095 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:49.121 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:49.158 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:49.242 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:49.307 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:49.405 W ns/e2e-containers-1304 pod/client-containers-38efc682-e620-417f-842c-f65cd3a6ae36 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:08:49.477 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:49.571 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:49.671 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:49.885 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:50.002 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:50.248 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:50.474 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:50.625 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:50.800 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:50.942 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:50.994 I ns/e2e-replication-controller-5281 pod/condition-test-lft6n node/ reason/Created
Sep 09 08:08:51.092 I ns/e2e-replication-controller-5281 replicationcontroller/condition-test reason/SuccessfulCreate Created pod: condition-test-lft6n
Sep 09 08:08:51.174 I ns/e2e-replication-controller-5281 pod/condition-test-lft6n node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:08:51.294 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:51.395 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:51.508 I ns/e2e-replication-controller-5281 pod/condition-test-mkzpm node/ reason/Created
Sep 09 08:08:51.524 W ns/e2e-replication-controller-5281 replicationcontroller/condition-test reason/FailedCreate Error creating: pods "condition-test-p9rbv" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
Sep 09 08:08:51.606 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:51.628 I ns/e2e-replication-controller-5281 replicationcontroller/condition-test reason/SuccessfulCreate Created pod: condition-test-mkzpm
Sep 09 08:08:51.699 I ns/e2e-replication-controller-5281 pod/condition-test-mkzpm node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:08:51.935 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:51.982 W ns/e2e-replication-controller-5281 replicationcontroller/condition-test reason/FailedCreate Error creating: pods "condition-test-cmxwx" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
Sep 09 08:08:52.069 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:52.506 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:52.598 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:52.689 W ns/e2e-replication-controller-5281 replicationcontroller/condition-test reason/FailedCreate Error creating: pods "condition-test-d6kp6" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
Sep 09 08:08:52.733 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:52.816 W ns/e2e-replication-controller-5281 replicationcontroller/condition-test reason/FailedCreate Error creating: pods "condition-test-2gn6v" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
Sep 09 08:08:52.837 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:52.898 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:53.015 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:53.063 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:53.097 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.175 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:53.195 I ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ reason/Created
Sep 09 08:08:53.196 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:53.249 I ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:08:53.311 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:53.355 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:53.396 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.449 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:53.474 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:53.538 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:53.569 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:53.605 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.671 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:53.703 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:53.766 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:53.804 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:53.830 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:53.848 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:53.872 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:53.965 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:54.006 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:54.045 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:54.136 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:54.163 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:54.268 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:54.327 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:54.357 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:54.502 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:54.563 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:54.779 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:54.843 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:54.886 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:54.935 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:54.964 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:55.040 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:55.090 I ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a node/ reason/Created
Sep 09 08:08:55.120 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:55.128 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:55.189 I ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:08:55.213 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:55.251 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:55.422 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:55.491 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:55.555 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:55.666 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:55.762 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:55.902 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:55.926 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:55.965 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.009 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:56.070 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:56.208 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:56.236 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:56.243 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.267 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:56.292 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:56.382 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:56.412 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:56.435 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.483 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:56.505 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:56.586 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:56.643 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.643 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:56.691 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:56.715 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:56.727 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 reason/AddedInterface Add eth0 [10.128.190.233/23]
Sep 09 08:08:56.832 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:56.868 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:56.906 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:56.960 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:57.022 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:57.101 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:57.155 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:57.203 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:57.360 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:57.363 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-main-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:08:57.383 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:57.427 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:57.451 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:57.511 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:57.577 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:57.612 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:57.715 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-main-container reason/Created
Sep 09 08:08:57.719 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:57.819 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:57.852 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-main-container reason/Started
Sep 09 08:08:57.872 W ns/e2e-replication-controller-1703 pod/pod-release-88cdg node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:57.891 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:57.908 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-sub-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:08:58.002 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:58.022 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:58.045 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:58.060 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-sub-container reason/Created
Sep 09 08:08:58.094 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.097 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:58.216 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-sub-container reason/Started
Sep 09 08:08:58.230 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:58.254 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:58.337 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:58.351 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:58.379 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.412 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:58.445 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:58.512 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:58.547 I ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-main-container reason/Ready
Sep 09 08:08:58.548 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:58.609 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.646 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:58.665 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:58.736 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:58.802 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:58.805 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:58.842 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:58.880 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:59.010 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:59.045 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:59.111 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:59.337 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:59.437 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:08:59.622 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:08:59.693 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:08:59.718 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:08:59.839 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:08:59.871 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:00.038 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:00.104 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:00.188 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:00.350 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:00.371 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:01.063 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:01.166 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:01.175 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:01.256 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:01.297 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:01.393 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:01.461 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:01.528 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:01.639 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:01.673 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:01.817 W ns/e2e-gc-7163 pod/simpletest.rc-kn8sx node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:09:01.817 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:01.884 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:01.949 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:02.033 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:02.067 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:02.249 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:02.311 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:02.338 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:02.446 W ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:09:02.479 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:02.524 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:02.693 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:02.727 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:02.781 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:02.876 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:02.921 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:03.071 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:03.133 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:03.155 I ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 node/ reason/Created
Sep 09 08:09:03.155 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:03.221 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:03.244 I ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:09:03.245 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:03.339 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:03.395 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:03.421 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:03.478 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:03.572 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:03.677 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:03.693 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:03.721 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:03.756 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:03.786 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:03.855 W ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:09:03.855 W ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 container/pod-adoption reason/NotReady
Sep 09 08:09:03.968 W ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:09:03.970 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:03.994 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:04.011 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:04.089 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:04.124 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:04.200 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:04.242 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:04.259 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:04.315 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:04.367 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:04.542 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:04.731 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:04.766 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:04.824 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:04.863 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:04.999 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:05.050 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:05.083 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.132 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:05.191 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:05.257 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:05.274 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:05.291 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.331 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:05.353 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:05.403 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:05.473 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:05.514 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.589 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:05.614 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:05.739 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:05.803 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:05.835 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:05.907 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:05.936 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:06.022 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:06.047 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:06.071 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.140 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:06.198 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:06.267 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:06.306 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:06.360 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.400 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:06.424 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:06.502 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:06.520 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:06.561 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.649 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:06.699 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:06.764 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:06.809 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:06.903 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:06.939 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:06.962 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:07.040 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:07.074 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:07.132 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.215 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:07.239 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:07.273 W ns/e2e-replication-controller-6889 pod/pod-adoption node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:09:07.326 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:07.361 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:07.420 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.483 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:07.511 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:07.568 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:07.592 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:07.627 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.671 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:07.697 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:07.755 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:07.769 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:07.790 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.818 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:07.843 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:07.864 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:07.883 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:07.917 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:07.954 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:07.982 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:08.068 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:08.114 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:08.167 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.193 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:08.221 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:08.334 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:08.357 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:08.363 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.425 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:08.442 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:08.492 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:08.507 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:08.530 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.582 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:08.627 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:08.659 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:08.682 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:08.700 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:08.730 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:08.740 W ns/e2e-replication-controller-5281 pod/condition-test-lft6n node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:08.752 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:08.766 W ns/e2e-replication-controller-5281 pod/condition-test-mkzpm node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:09:08.829 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:08.866 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:08.902 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:09.033 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:09.060 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:09.221 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:09.263 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:09.298 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:09.476 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:09.523 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:09.740 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:09.813 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:09.822 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:09.938 W ns/e2e-replication-controller-5281 pod/condition-test-mkzpm node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:09:09.941 W ns/e2e-replication-controller-5281 pod/condition-test-lft6n node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:10.147 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:10.178 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:10.476 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:10.668 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:10.716 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:10.858 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:10.969 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:11.273 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:11.313 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:11.366 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:11.493 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:11.606 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:11.698 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:11.742 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:11.765 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:11.877 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:11.907 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:11.964 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:11.979 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:12.008 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.030 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:12.047 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:12.108 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:12.144 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:12.179 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.199 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:12.218 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:12.264 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:12.288 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:12.294 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.339 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:12.372 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:12.442 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:12.489 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:12.554 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:12.559 I ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 reason/AddedInterface Add eth0 [10.128.157.115/23]
Sep 09 08:09:12.759 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:12.806 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:12.888 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:12.933 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:13.011 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.036 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:13.060 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:13.109 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:13.143 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:13.156 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.189 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:13.213 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:13.256 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:13.271 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:13.307 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.331 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:13.367 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:13.373 I ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:09:13.420 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:13.454 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:13.475 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.504 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:13.536 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:13.617 I ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Created
Sep 09 08:09:13.664 I ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Started
Sep 09 08:09:13.692 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:13.717 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:13.745 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.772 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:13.806 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:13.858 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:13.867 I ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Ready
Sep 09 08:09:13.901 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:13.909 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:13.923 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:13.944 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:13.951 W ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:13.994 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:14.012 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:14.041 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.206 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:14.240 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:14.339 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:14.397 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.403 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:14.447 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:14.483 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:14.560 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:14.593 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:14.621 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.705 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:14.732 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:14.809 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:14.830 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:14.864 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:14.914 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:14.945 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:15.095 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:15.179 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:15.204 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:15.261 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:15.304 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:15.362 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:15.379 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:15.407 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:15.471 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:15.500 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:15.635 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:15.655 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:15.683 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:15.757 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:15.789 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:15.982 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:16.063 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:16.129 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:16.226 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:16.266 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:16.445 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:16.497 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:16.549 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:16.580 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:16.643 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:16.788 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:16.845 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:16.851 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:16.893 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:16.899 W ns/e2e-emptydir-4509 pod/pod-sharedvolume-3f7cf5a7-3319-4b6a-b059-9773ded444a1 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:16.939 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:17.100 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:17.174 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:17.216 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.270 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:17.298 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:17.352 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:17.382 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:17.416 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.449 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:17.468 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:17.558 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:17.613 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:17.626 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.667 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:17.677 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:17.757 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:17.787 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:17.812 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.834 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:17.849 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:17.885 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:17.911 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:17.934 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:17.965 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:17.989 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:18.027 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:18.050 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:18.072 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.176 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:18.208 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:18.293 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:18.336 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:18.375 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.414 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:18.443 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:18.498 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:18.533 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:18.598 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.642 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:18.665 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:18.714 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:18.741 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:18.768 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:18.820 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:18.859 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:18.928 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:18.968 W ns/e2e-gc-7163 pod/simpletest.rc-8qbhk node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:09:18.982 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:19.031 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.099 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:19.128 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:19.255 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:19.323 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:19.346 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.374 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:19.405 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:19.536 I ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 reason/AddedInterface Add eth0 [10.128.158.112/23]
Sep 09 08:09:19.616 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:19.646 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:19.680 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.720 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:19.746 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:19.807 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:19.858 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:19.916 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:19.997 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:19.997 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:20.150 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:20.168 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:20.214 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:20.214 I ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:09:20.275 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:20.333 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:20.457 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ reason/Created
Sep 09 08:09:20.523 I ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Created
Sep 09 08:09:20.787 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:20.838 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:09:20.854 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:20.958 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:21.041 I ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Started
Sep 09 08:09:21.197 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:21.274 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:21.391 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:21.421 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:21.434 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:21.476 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:21.518 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:21.653 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:21.687 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:21.721 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:21.787 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:21.812 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:21.984 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:22.023 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:22.065 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:22.105 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:22.144 W ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:22.150 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:22.238 W ns/e2e-gc-7163 pod/simpletest.rc-8qbhk node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:22.390 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:22.447 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:22.488 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:22.815 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:22.910 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:23.162 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:23.197 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:23.235 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.293 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:23.336 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:23.416 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:23.440 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:23.472 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.509 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:23.539 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:23.615 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:23.657 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.660 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:23.700 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:23.724 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:23.789 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:23.808 - 314s  I test="[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:09:23.828 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:23.832 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:23.892 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:23.938 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:24.027 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:24.079 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:24.106 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:24.170 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:24.199 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:24.250 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:24.251 W ns/e2e-var-expansion-8808 pod/var-expansion-acf35875-bf61-4d9e-b0a1-d471889f4a33 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:24.311 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 reason/AddedInterface Add eth0 [10.128.162.12/23]
Sep 09 08:09:24.314 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:24.341 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:24.435 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:24.507 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:24.532 I ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request reason/AddedInterface Add eth0 [10.128.177.11/23]
Sep 09 08:09:24.659 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:24.727 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:24.777 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:24.817 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:24.849 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:24.910 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:24.966 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:25.012 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:25.040 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 container/busybox reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:09:25.091 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:25.111 I ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ reason/Created
Sep 09 08:09:25.172 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:25.220 I ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:09:25.319 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 container/busybox reason/Created
Sep 09 08:09:25.356 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 container/busybox reason/Started
Sep 09 08:09:25.384 I ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:09:25.439 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:25.469 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:25.537 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:25.630 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:25.658 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:25.765 I ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Created
Sep 09 08:09:25.820 I ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ reason/Created
Sep 09 08:09:25.836 I ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Started
Sep 09 08:09:25.837 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:25.882 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:25.922 I ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:09:25.942 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:25.958 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 container/busybox reason/Ready
Sep 09 08:09:26.096 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:26.119 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:26.190 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:26.227 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:26.259 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.298 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:26.322 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:26.369 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:26.391 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:26.424 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.458 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:26.496 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:26.527 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:26.566 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:26.591 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.630 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:26.665 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:26.697 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:26.723 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:26.749 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.769 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:26.777 I ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Ready
Sep 09 08:09:26.791 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:26.830 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:26.869 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:26.879 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:26.904 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:26.923 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:26.980 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:27.013 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:27.014 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.038 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:27.063 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:27.177 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:27.212 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:27.257 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.298 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:27.316 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:27.387 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ reason/Created
Sep 09 08:09:27.415 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:27.488 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:27.489 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:09:27.536 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.646 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:27.670 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:27.777 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:27.801 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:27.821 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:27.919 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:27.935 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:28.000 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:28.025 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:28.043 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.094 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:28.115 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:28.174 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:28.199 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:28.215 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.252 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:28.274 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:28.342 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:28.369 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:28.395 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.463 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:28.496 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:28.555 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:28.611 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:28.630 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.688 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:28.744 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:28.780 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:28.810 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:28.828 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:28.828 W ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:09:28.877 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:28.901 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:28.942 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:28.961 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:28.981 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.017 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:29.032 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:29.090 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:29.150 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:29.157 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.184 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:29.205 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:29.278 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:29.339 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:29.374 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.428 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:29.449 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:29.522 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:29.563 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:29.590 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.661 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:29.679 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:29.777 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:29.802 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:29.825 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:29.881 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:29.914 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:29.961 E ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container init container exited with code 2 (Error): 
Sep 09 08:09:29.961 E ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:09:29.961 E ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 container/client-container container exited with code 2 (Error): 
Sep 09 08:09:30.016 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:30.056 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:30.092 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:30.137 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:30.181 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:30.239 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:30.293 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:30.336 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:30.380 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:30.406 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:30.482 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:30.574 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:30.610 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:30.685 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:30.721 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:30.801 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:30.872 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:30.934 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:31.228 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:31.281 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:31.441 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:31.504 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:31.562 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:31.647 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:31.663 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:31.752 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:31.774 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:31.803 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:31.850 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:31.870 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:31.930 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:31.965 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:31.976 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:32.013 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:32.029 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:32.122 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:32.146 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:32.188 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:32.580 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:32.683 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:32.805 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:32.850 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:32.877 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:32.910 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:32.936 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:33.023 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:33.082 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:33.092 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.123 I ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a reason/AddedInterface Add eth0 [10.128.165.153/23]
Sep 09 08:09:33.154 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:33.208 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:33.298 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:33.321 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:33.330 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.363 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:33.372 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:33.468 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:33.498 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:33.548 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.607 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:33.624 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:33.751 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:33.786 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:33.811 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:33.843 I ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:09:33.951 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:33.973 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:34.038 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:34.059 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:34.068 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.094 I ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Created
Sep 09 08:09:34.189 I ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Started
Sep 09 08:09:34.190 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:34.228 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:34.322 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:34.339 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:34.362 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.398 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:34.425 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:34.550 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:34.599 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:34.641 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.715 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:34.746 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:34.814 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:34.858 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:34.881 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:34.925 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:34.944 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:35.046 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:35.113 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:35.117 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.188 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:35.215 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:35.277 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:35.323 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:35.347 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.391 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:35.399 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:35.507 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:35.580 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:35.580 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook reason/AddedInterface Add eth0 [10.128.177.254/23]
Sep 09 08:09:35.583 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.760 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:35.792 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:35.866 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:35.880 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:35.918 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:35.934 W ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:09:35.984 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:36.004 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:36.160 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:36.187 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:36.236 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:36.291 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:36.298 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-http-hook reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:09:36.330 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:36.363 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:36.404 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:36.415 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:36.470 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:36.498 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:36.550 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:36.567 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:36.596 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:36.643 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:36.665 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:36.701 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-http-hook reason/Created
Sep 09 08:09:36.725 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:36.753 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:36.789 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-http-hook reason/Started
Sep 09 08:09:36.951 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.019 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-http-hook reason/Ready
Sep 09 08:09:37.032 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:37.054 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:37.192 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:37.237 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:37.247 W ns/e2e-downward-api-8460 pod/labelsupdate1c645955-b668-40b2-a0c9-f131b8be7128 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:09:37.262 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.326 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:37.354 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:37.445 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:37.483 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:37.516 W ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 15s
Sep 09 08:09:37.516 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.573 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:37.630 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:37.720 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:37.744 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:37.775 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:37.802 I ns/e2e-statefulset-1701 pod/test-pod reason/AddedInterface Add eth0 [10.128.136.54/23]
Sep 09 08:09:37.808 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:37.835 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:37.838 W ns/e2e-projected-113 pod/downwardapi-volume-59e8352a-726c-4c3e-8c80-147c4771fc9a node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:09:37.935 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:37.964 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:37.977 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.016 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:38.040 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:38.094 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:38.137 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:38.142 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.208 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:38.236 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:38.271 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:38.301 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:38.310 I ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 reason/AddedInterface Add eth0 [10.128.132.116/23]
Sep 09 08:09:38.325 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.372 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:38.396 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:38.457 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:38.497 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:38.548 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.645 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:38.672 I ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:09:38.708 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:38.766 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:38.819 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:38.854 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:38.864 I ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-prestop-http-hook reason/Killing
Sep 09 08:09:38.955 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:38.997 I ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:09:38.998 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:39.022 I ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:09:39.078 I ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:09:39.101 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:39.139 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:39.140 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:39.176 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:39.204 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:39.266 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:39.281 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:39.320 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:39.351 I ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:09:39.355 I ns/e2e-gc-2001 deployment/simpletest.deployment reason/ScalingReplicaSet Scaled up replica set simpletest.deployment-59cfbf9b4d to 2
Sep 09 08:09:39.400 I ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-6hstm node/ reason/Created
Sep 09 08:09:39.416 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:39.423 I ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:09:39.457 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:39.499 I ns/e2e-gc-2001 replicaset/simpletest.deployment-59cfbf9b4d reason/SuccessfulCreate Created pod: simpletest.deployment-59cfbf9b4d-6hstm
Sep 09 08:09:39.560 I ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-6hstm node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:09:39.605 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:39.611 I ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-tp92j node/ reason/Created
Sep 09 08:09:39.641 I ns/e2e-gc-2001 replicaset/simpletest.deployment-59cfbf9b4d reason/SuccessfulCreate Created pod: simpletest.deployment-59cfbf9b4d-tp92j
Sep 09 08:09:39.641 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:39.672 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:39.717 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:39.737 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:39.757 I ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-tp92j node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:09:39.815 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:39.826 I ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:09:39.856 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:39.994 E ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver container exited with code 1 (Error): 
Sep 09 08:09:40.058 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:40.182 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:40.199 I ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:09:40.218 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:40.239 I ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:09:40.296 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:40.331 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:40.353 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:40.431 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:40.448 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:40.589 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:40.626 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:40.661 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:40.738 W ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-tp92j node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:40.751 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:40.762 W ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-6hstm node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:09:40.801 W ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-tp92j node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:40.816 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:40.832 W ns/e2e-gc-2001 pod/simpletest.deployment-59cfbf9b4d-6hstm node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:09:40.851 W ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container
Sep 09 08:09:40.963 W ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Restarted
Sep 09 08:09:40.963 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:40.993 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:41.068 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:41.121 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:41.185 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:41.238 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:41.269 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/NodePorts Predicate NodePorts failed
Sep 09 08:09:41.284 E ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (NodePorts): Pod Predicate NodePorts failed
Sep 09 08:09:41.341 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:41.353 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:41.410 W ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:09:41.439 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Created
Sep 09 08:09:41.841 W ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container (2 times)
Sep 09 08:09:41.979 W ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:09:42.185 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ reason/Created
Sep 09 08:09:42.257 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:09:42.810 W ns/e2e-container-lifecycle-hook-4083 pod/pod-with-prestop-http-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:44.307 W ns/e2e-statefulset-1701 pod/test-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:09:47.508 W ns/e2e-emptydir-8126 pod/pod-3a3e77e1-df70-4ac0-9d4e-b1a2f9ba9692 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:09:48.566 I ns/e2e-statefulset-1701 pod/ss-0 reason/AddedInterface Add eth0 [10.128.136.54/23]
Sep 09 08:09:49.308 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:09:49.601 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:09:49.695 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:09:50.044 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:09:50.374 I ns/e2e-webhook-5806 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:09:50.446 I ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ reason/Created
Sep 09 08:09:50.473 I ns/e2e-webhook-5806 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-7zvbj
Sep 09 08:09:50.637 I ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:09:50.719 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 reason/AddedInterface Add eth0 [10.128.152.98/23]
Sep 09 08:09:51.405 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:09:51.650 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 container/liveness reason/Created
Sep 09 08:09:51.659 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:09:51.712 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 container/liveness reason/Started
Sep 09 08:09:52.014 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 container/liveness reason/Ready
Sep 09 08:09:52.963 I ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 04:09:57.314 - 301s  I test="[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:09:57.733 W ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:09:58.700 I ns/e2e-pod-network-test-3433 pod/netserver-0 node/ reason/Created
Sep 09 08:09:58.759 I ns/e2e-pod-network-test-3433 pod/netserver-1 node/ reason/Created
Sep 09 08:09:58.781 I ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:09:58.805 I ns/e2e-pod-network-test-3433 pod/netserver-2 node/ reason/Created
Sep 09 08:09:58.825 I ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:09:58.900 I ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:09:59.005 W ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:09:59.005 W ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/NotReady
Sep 09 08:09:59.433 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:09:59.433 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:09:59.782 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:10:03.968 - 90s   W ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 04:10:05.436 I test="[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:10:06.823 I ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ reason/Created
Sep 09 08:10:06.864 I ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:10:15.806 W ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:10:15.835 W ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:10:15.861 W ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:10:17.160 W ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:10:17.160 W ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/NotReady
Sep 09 08:10:18.893 E ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr container/externalname-service container exited with code 137 (Error): 
Sep 09 08:10:18.969 - 74s   W ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:10:19.271 W ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:10:19.271 W ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 container/externalname-service reason/NotReady
Sep 09 08:10:25.367 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": dial tcp 10.196.3.65:8091: connect: connection refused (26 times)
Sep 09 08:10:33.968 - 30s   W ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:10:33.968 - 60s   W ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:10:33.968 - 240s  W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:10:44.390 I ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ reason/Created
Sep 09 08:10:44.445 I ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:10:48.968 - 254s  W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:11:03.968 - 45s   W ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:11:03.968 - 45s   W ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:11:03.968 - 224s  W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr pod has been pending longer than a minute
Sep 09 08:11:03.968 - 224s  W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:11:09.882 W ns/e2e-statefulset-1701 pod/ss-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:11:12.100 I ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef reason/AddedInterface Add eth0 [10.128.167.148/23]
Sep 09 08:11:12.905 I ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:11:12.914 I ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb node/ reason/Created
Sep 09 08:11:12.955 I ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:13.313 I ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Created
Sep 09 08:11:13.389 I ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Started
Sep 09 08:11:14.920 W ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:11:16.424 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:11:16.798 W clusteroperator/network changed Progressing to False
Sep 09 08:11:17.139 W ns/e2e-secrets-6031 pod/pod-secrets-061e9d87-4444-45aa-ad76-912ee73012ef node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:11:18.968 - 209s  W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:11:20.351 I ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba node/ reason/Created
Sep 09 08:11:20.410 I ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:31.647 I ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 reason/AddedInterface Add eth0 [10.128.196.213/23]
Sep 09 08:11:32.212 I ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:11:32.484 I ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:11:32.522 I ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:11:33.432 I ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:11:35.706 I ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 node/ reason/Created
Sep 09 08:11:35.803 I ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:36.525 W ns/e2e-container-lifecycle-hook-4083 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:11:36.607 W ns/e2e-services-8942 pod/externalname-service-jzvt7 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:11:38.845 W ns/e2e-services-8942 pod/execpodghx5k node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:11:39.182 I ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb reason/AddedInterface Add eth0 [10.128.198.208/23]
Sep 09 08:11:40.004 I ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:11:40.297 I ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 08:11:40.713 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(a2b33079fa187c1976cce9ce485977fa79ed66a7bae3669ceea54edded6eff53): netplugin failed: "2020/09/09 08:09:25 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-var-expansion-403;K8S_POD_NAME=var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f;K8S_POD_INFRA_CONTAINER_ID=a2b33079fa187c1976cce9ce485977fa79ed66a7bae3669ceea54edded6eff53, CNI_NETNS=/var/run/netns/24218965-6cac-4e4d-8b57-bfae577fc86d).\n"
Sep 09 08:11:40.893 I ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 08:11:42.323 W ns/e2e-services-8942 pod/externalname-service-x6xw2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:11:43.329 W ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:11:44.448 W ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:11:44.564 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(6fd5ab617aa2f755bf0388e9a2cee0802be737e3e86b62ff2c6d297882bfda14): netplugin failed: "2020/09/09 08:09:42 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-dns-2176;K8S_POD_NAME=dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47;K8S_POD_INFRA_CONTAINER_ID=6fd5ab617aa2f755bf0388e9a2cee0802be737e3e86b62ff2c6d297882bfda14, CNI_NETNS=/var/run/netns/2342dbab-9c7d-4df6-9e68-784bdb0d74a6).\n"
Sep 09 08:11:45.176 W ns/e2e-projected-4813 pod/downwardapi-volume-72285237-de98-424b-9776-23c12cbbd0bb node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:11:45.488 E ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container init container exited with code 2 (Error): 
Sep 09 08:11:45.488 E ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:11:45.488 E ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 container/test-container container exited with code 2 (Error): 
Sep 09 08:11:47.431 I ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa node/ reason/Created
Sep 09 08:11:47.484 I ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa node/ reason/Created
Sep 09 08:11:47.507 I ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:11:47.584 I ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa node/ reason/Created
Sep 09 08:11:47.605 I ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:11:47.670 I ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-mountspec node/ reason/Created
Sep 09 08:11:47.713 I ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:47.731 I ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-mountspec node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:11:47.797 I ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-mountspec node/ reason/Created
Sep 09 08:11:47.876 I ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-mountspec node/ reason/Created
Sep 09 08:11:47.878 I ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-mountspec node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:47.952 I ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-nomountspec node/ reason/Created
Sep 09 08:11:48.012 I ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-mountspec node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:11:48.029 I ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-nomountspec node/ reason/Created
Sep 09 08:11:48.075 I ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-nomountspec node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:48.087 I ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-nomountspec node/ reason/Created
Sep 09 08:11:48.151 I ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-nomountspec node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:11:48.168 I ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-nomountspec node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:11:48.282 I ns/e2e-pod-network-test-3433 pod/netserver-1 reason/AddedInterface Add eth0 [10.128.192.78/23]
Sep 09 08:11:48.539 I ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba reason/AddedInterface Add eth0 [10.128.135.158/23]
Sep 09 08:11:48.993 I ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:11:49.324 I ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:11:49.392 I ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:11:49.670 I ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:11:49.913 I ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Created
Sep 09 08:11:50.064 I ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba node/ostest-5xqm8-worker-0-cbbx9 container/secret-volume-test reason/Started
Sep 09 08:11:50.085 I ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj reason/AddedInterface Add eth0 [10.128.138.33/23]
Sep 09 08:11:50.827 W ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:11:51.033 I ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:11:51.168 I ns/e2e-limitrange-2272 pod/pod-no-resources node/ reason/Created
Sep 09 08:11:51.348 I ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 08:11:51.357 W ns/e2e-limitrange-2272 pod/pod-no-resources reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:51.461 I ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 08:11:51.513 I ns/e2e-limitrange-2272 pod/pod-partial-resources node/ reason/Created
Sep 09 08:11:51.562 W ns/e2e-limitrange-2272 pod/pod-no-resources reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:51.700 W ns/e2e-limitrange-2272 pod/pod-partial-resources reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:51.812 I ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 08:11:51.833 W ns/e2e-limitrange-2272 pod/pod-partial-resources reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:53.811 W ns/e2e-limitrange-2272 pod/pod-no-resources reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:53.921 I ns/e2e-limitrange-2272 pod/pfpod node/ reason/Created
Sep 09 08:11:53.967 W ns/e2e-limitrange-2272 pod/pfpod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:54.021 W ns/e2e-limitrange-2272 pod/pfpod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:54.316 W ns/e2e-secrets-8147 pod/pod-secrets-446fdde8-267b-4b87-9f05-1d807ace0eba node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:11:54.848 W ns/e2e-limitrange-2272 pod/pod-partial-resources reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:54.874 W ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:11:56.041 I ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f node/ reason/Created
Sep 09 08:11:56.091 I ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:56.535 W ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:11:56.535 W ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/NotReady
Sep 09 08:11:56.814 W ns/e2e-limitrange-2272 pod/pfpod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:11:56.893 W ns/e2e-webhook-5806 pod/sample-webhook-deployment-7bc8486f8c-7zvbj node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:11:57.144 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ reason/Created
Sep 09 08:11:57.208 I ns/e2e-kubectl-1621 replicationcontroller/update-demo-nautilus reason/SuccessfulCreate Created pod: update-demo-nautilus-47zv5
Sep 09 08:11:57.282 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:11:57.327 W ns/e2e-containers-8244 pod/client-containers-a5ea4489-6528-4f76-a5c2-c26ef1c58209 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:11:57.364 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ reason/Created
Sep 09 08:11:57.426 I ns/e2e-kubectl-1621 replicationcontroller/update-demo-nautilus reason/SuccessfulCreate Created pod: update-demo-nautilus-czbj2
Sep 09 08:11:57.550 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:11:59.033 I ns/e2e-limitrange-2272 pod/pfpod2 node/ reason/Created
Sep 09 08:11:59.078 W ns/e2e-limitrange-2272 pod/pfpod2 reason/FailedScheduling 0/6 nodes are available: 6 Insufficient ephemeral-storage.
Sep 09 08:12:00.543 I ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 node/ reason/Created
Sep 09 08:12:00.776 I ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:12:01.151 I ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:12:01.293 W ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:12:01.322 W ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-mountspec node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:12:01.361 W ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-nomountspec node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:12:01.397 W ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:12:01.456 W ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-mountspec node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:12:01.467 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Sep 09 08:12:01.493 W ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-nomountspec node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:12:01.544 W ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:12:01.609 W ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-mountspec node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:12:01.659 W ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-nomountspec node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:12:04.674 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(3ae763538d0abbb1628fc2e8294a4de3fcfc36145752823394cffe2f344db4e7): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:12:06.786 W ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-nomountspec node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:12:06.846 W ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:12:06.921 W ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-mountspec node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:12:07.239 W ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-mountspec node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:12:07.345 W ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:12:07.441 W ns/e2e-svcaccounts-7048 pod/pod-service-account-defaultsa-mountspec node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:12:07.547 W ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:12:08.011 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(42ca0204598849b83353caabb13e5281d75dcfbda6140caf5ac8de376a884f06): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:12:08.696 W ns/e2e-limitrange-2272 pod/pfpod node/ reason/GracefulDelete in 0s
Sep 09 08:12:08.705 W ns/e2e-limitrange-2272 pod/pfpod node/ reason/Deleted
Sep 09 08:12:08.721 W ns/e2e-limitrange-2272 pod/pfpod2 node/ reason/GracefulDelete in 0s
Sep 09 08:12:08.736 W ns/e2e-limitrange-2272 pod/pfpod2 node/ reason/Deleted
Sep 09 08:12:08.752 W ns/e2e-limitrange-2272 pod/pod-no-resources node/ reason/GracefulDelete in 0s
Sep 09 08:12:08.766 W ns/e2e-limitrange-2272 pod/pod-no-resources node/ reason/Deleted
Sep 09 08:12:08.785 W ns/e2e-limitrange-2272 pod/pod-partial-resources node/ reason/GracefulDelete in 0s
Sep 09 08:12:08.801 W ns/e2e-limitrange-2272 pod/pod-partial-resources node/ reason/Deleted
Sep 09 08:12:12.442 W ns/e2e-svcaccounts-7048 pod/pod-service-account-nomountsa-nomountspec node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:12:12.653 W ns/e2e-svcaccounts-7048 pod/pod-service-account-mountsa-nomountspec node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:12:15.949 I ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 reason/AddedInterface Add eth0 [10.128.167.202/23]
Sep 09 08:12:16.631 I ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:12:16.957 I ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:12:17.564 I ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:12:18.315 W ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:12:19.894 W ns/e2e-containers-9108 pod/client-containers-6cf29119-99d7-499d-a4fc-f9670c78b819 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:12:22.557 I ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ reason/Created
Sep 09 08:12:22.805 I ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:12:30.531 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(d6fde9f5c5cc3b309ed904aab0c50646ae813bc447181a6fb341d57f53bc35bc): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:12:30.985 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(27081399cb58c2cc7a46b6641a304721e2f881600e7b01939cfa81a5b101a22a): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:12:31.300 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 reason/AddedInterface Add eth0 [10.128.178.116/23]
Sep 09 08:12:31.838 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 reason/AddedInterface Add eth0 [10.128.178.177/23]
Sep 09 08:12:32.013 I ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 reason/AddedInterface Add eth0 [10.128.200.232/23]
Sep 09 08:12:32.036 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Pulled image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:12:32.295 I ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f reason/AddedInterface Add eth0 [10.128.120.181/23]
Sep 09 08:12:32.368 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Created
Sep 09 08:12:32.485 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Started
Sep 09 08:12:32.779 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Ready
Sep 09 08:12:32.919 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Pulled image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:12:33.090 I ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:12:33.177 I ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:12:33.301 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Created
Sep 09 08:12:33.348 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Started
Sep 09 08:12:33.375 I ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:12:33.429 I ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-test reason/Created
Sep 09 08:12:33.469 I ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:12:33.508 I ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 node/ostest-5xqm8-worker-0-twrlr container/configmap-volume-test reason/Started
Sep 09 08:12:33.879 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Ready
Sep 09 08:12:34.483 W ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:12:35.021 W ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:12:36.277 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:12:36.340 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/Killing
Sep 09 08:12:36.340 I ns/e2e-kubectl-1621 replicationcontroller/update-demo-nautilus reason/SuccessfulDelete Deleted pod: update-demo-nautilus-czbj2
Sep 09 08:12:37.864 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:12:37.864 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr container/update-demo reason/NotReady
Sep 09 08:12:39.022 W ns/e2e-emptydir-9505 pod/pod-a16602e4-a243-4f36-97c9-120b563c2a8f node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:12:40.200 W ns/e2e-configmap-7304 pod/pod-configmaps-2b8c89c4-dada-4891-9a02-0592993e8a48 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:12:40.275 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-czbj2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:12:41.794 I ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ reason/Created
Sep 09 08:12:41.846 I ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:12:43.041 I ns/e2e-webhook-6502 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:12:43.336 I ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ reason/Created
Sep 09 08:12:43.462 I ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:12:43.474 I ns/e2e-webhook-6502 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-bpbwh
Sep 09 08:12:43.756 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ reason/Created
Sep 09 08:12:43.794 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:12:43.811 I ns/e2e-kubectl-1621 replicationcontroller/update-demo-nautilus reason/SuccessfulCreate Created pod: update-demo-nautilus-9cc79
Sep 09 08:12:45.147 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(6104dc20a31121771aab79970bf8960df7e69580a40969e8d063460083c71ba7): netplugin failed: "2020/09/09 08:09:59 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-pod-network-test-3433;K8S_POD_NAME=netserver-0;K8S_POD_INFRA_CONTAINER_ID=6104dc20a31121771aab79970bf8960df7e69580a40969e8d063460083c71ba7, CNI_NETNS=/var/run/netns/2e8bffc3-91b6-444d-8aac-f1510f1e9857).\n"
Sep 09 08:12:45.930 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(fd22b10eb38a57a7eee931e0a57a34d7c4b507a478fb5b32bbeda971cd9e71d9): netplugin failed: "2020/09/09 08:09:59 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-pod-network-test-3433;K8S_POD_NAME=netserver-2;K8S_POD_INFRA_CONTAINER_ID=fd22b10eb38a57a7eee931e0a57a34d7c4b507a478fb5b32bbeda971cd9e71d9, CNI_NETNS=/var/run/netns/1afc9e48-6949-4ad9-94ab-977cdaff2e07).\n"
Sep 09 08:12:51.270 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780_e2e-projected-7710_4cc38614-8870-43ef-a37f-eaef2e9efad6_0(4da8713b71c7827c9d1a36ae437fcf8d73dd57d51c7ac4f81769f87d2f7cd8c8): netplugin failed: "2020/09/09 08:10:07 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-projected-7710;K8S_POD_NAME=pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780;K8S_POD_INFRA_CONTAINER_ID=4da8713b71c7827c9d1a36ae437fcf8d73dd57d51c7ac4f81769f87d2f7cd8c8, CNI_NETNS=/var/run/netns/b241ee9e-c4ef-4dbc-9b9a-675645fef552).\n"
Sep 09 08:12:53.642 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(6c71d2593e0bd3ce3d42f3522ea1abeea647ad20db399b1e8ba18e532312619f): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:12:53.833 I ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be reason/AddedInterface Add eth0 [10.128.142.239/23]
Sep 09 08:12:54.478 I ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 container/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:12:54.749 I ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 container/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be reason/Created
Sep 09 08:12:54.849 I ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 container/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be reason/Started
Sep 09 08:12:55.713 E ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 container/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be init container exited with code 1 (Error): 
Sep 09 08:12:55.713 E ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:12:55.713 E ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 container/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be container exited with code 1 (Error): 
Sep 09 08:12:56.904 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(06f30c72df5ff57f34d38ee7ff9253733f7d24356499371cd8c3b57719ef21ff): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:03.578 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ reason/Created
Sep 09 08:13:03.818 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:13:05.938 W ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:13:07.477 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(3cac33b9f2b03a1350292daa6d9db3cb74ca47c2482b8a9fd8cc6caa63ef2685): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:10.371 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(abde792a67b3769218acc50f9b3239ba1b33955eeefe18cbd3e1c6753766f793): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:11.167 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:13:11.167 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:13:11.378 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500
Sep 09 08:13:11.691 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:13:12.360 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (24 times)
Sep 09 08:13:14.401 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780_e2e-projected-7710_4cc38614-8870-43ef-a37f-eaef2e9efad6_0(958bc28ad2337f9a86d926eef343f2176b3305a9421823074ba20e7d1cc2fe74): [e2e-projected-7710/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:18.875 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(4925471c48b20eac83648b95e6c3ca5a5ebf717a728597e1e7ba401018bb45c4): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:19.441 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(90d9945af927afd73de71b966b7a079697d64c35051f419a065e3d381faeeb86): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:20.606 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (11 times)
Sep 09 08:13:21.322 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Sep 09 08:13:27.606 W ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:13:27.635 I ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 container/busybox reason/Killing
Sep 09 08:13:30.094 I ns/e2e-services-6570 pod/pod1 node/ reason/Created
Sep 09 08:13:30.211 I ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:13:30.604 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (12 times)
Sep 09 08:13:30.974 E ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 container/busybox container exited with code 137 (Error): 
Sep 09 08:13:31.270 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Sep 09 08:13:32.239 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(e1d61de5bc87436fb8c334f1e60c2fb48606ebc35f1be63c8e81ae7733eadd40): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:33.507 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(3c9511cca6fc8d33c3b5ed8e87962e5114f8e4e2af5eacdf6dc17b43833dd44d): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:36.696 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780_e2e-projected-7710_4cc38614-8870-43ef-a37f-eaef2e9efad6_0(368eeb7c55106b75587e5030ef1913e8756bda70d556f19b89e1d3c0636bd385): [e2e-projected-7710/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:40.636 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (13 times)
Sep 09 08:13:41.381 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (4 times)
Sep 09 08:13:41.505 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(9b0621e99c6064b93707337280a8f8f77fa1eae6d67099beb46383b16fce2d91): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:41.847 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(569acd5590a2375de3a7eb01fddaea2c1dea9d5fbb8d3e9d3a6dd412f0399030): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:48.969 - 59s   W ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:13:48.969 - 59s   W ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:13:48.969 - 74s   W ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:13:50.598 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (14 times)
Sep 09 08:13:51.267 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (5 times)
Sep 09 08:13:52.952 W ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:13:52.990 I ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 container/liveness reason/Killing
Sep 09 08:13:54.010 E ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 container/liveness container exited with code 2 (Error): 
Sep 09 08:13:55.511 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(05835d42804922b8d611cdd92ff2d7529978ce454792f81e2f686f158ceb8c5d): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:56.353 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(4b67c3ab223ae416d50eafb927bdf9ae1c69474e88450e6f6912b02893bbde32): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:13:59.419 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780_e2e-projected-7710_4cc38614-8870-43ef-a37f-eaef2e9efad6_0(ce837a0463f6281737409a9e9bf11c08277ec9a6dd6772e557fa088897c6ff7a): [e2e-projected-7710/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:00.592 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (15 times)
Sep 09 08:14:01.271 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (6 times)
Sep 09 08:14:03.899 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(20e2667c228f09c09275d8ee5cfe4f94e4d89d8e3dbb45064c04ef40da96052c): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:03.968 - 119s  W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:14:06.369 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(579cc3452654616a1c5b07ec2fdb48e9219b6b3543774d265e607e719cc4f9fd): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:10.583 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (16 times)
Sep 09 08:14:11.287 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (7 times)
Sep 09 08:14:11.420 I ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 node/ reason/Created
Sep 09 08:14:11.484 I ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:14:20.347 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(4bb0d7f3565d0c22ae15b6c87814eb6c6da52e5cd80bb6c321439b6916f1cd6e): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:20.447 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(bf12c36cbde4a25c70fbcb88a5dc3dfe62a7d49c13eec241426f8462e2dc7f9a): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:20.576 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (17 times)
Sep 09 08:14:21.272 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (8 times)
Sep 09 08:14:22.365 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780_e2e-projected-7710_4cc38614-8870-43ef-a37f-eaef2e9efad6_0(7a108d3179c6be305725dba1a177a8a6d223b7a51b4f02312a1f5e9d918e4801): [e2e-projected-7710/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:26.253 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:14:28.655 W ns/e2e-container-probe-9159 pod/liveness-aff7576c-81b3-4a99-b90f-d3f475d81615 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:14:29.831 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(830bc64d15c1219d75bcb07de63f758c6b5ca973f61b64451ac22721f010b80d): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:30.411 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f_e2e-var-expansion-403_f4864a85-d9f2-4b77-b8f9-be253d4f0c57_0(79931d71452c67ff8a084e02d5c0da9073cfe1ada73a18c19d2742d512789dee): [e2e-var-expansion-403/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:14:30.590 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (18 times)
Sep 09 08:14:31.260 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (9 times)
Sep 09 08:14:33.968 - 75s   W ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:14:36.811 W ns/e2e-var-expansion-403 pod/var-expansion-f356b3e3-e228-49ff-975e-41031b23e28f node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:14:37.580 I ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 reason/AddedInterface Add eth0 [10.128.183.120/23]
Sep 09 08:14:38.346 I ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 04:14:38.398 I test="[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:14:38.603 I ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Created
Sep 09 08:14:38.702 I ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Started
Sep 09 08:14:39.871 W ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:14:40.610 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (19 times)
Sep 09 08:14:41.343 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (10 times)
Sep 09 08:14:41.364 I ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Killing
Sep 09 08:14:41.770 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ reason/Created
Sep 09 08:14:41.801 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-cr9bz
Sep 09 08:14:41.820 I ns/e2e-gc-6322 pod/simpletest.rc-fqdm8 node/ reason/Created
Sep 09 08:14:41.840 I ns/e2e-gc-6322 pod/simpletest.rc-wl8pk node/ reason/Created
Sep 09 08:14:41.879 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-wl8pk
Sep 09 08:14:41.884 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:14:41.935 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-fqdm8
Sep 09 08:14:41.953 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ reason/Created
Sep 09 08:14:41.954 I ns/e2e-gc-6322 pod/simpletest.rc-66vkx node/ reason/Created
Sep 09 08:14:41.957 I ns/e2e-gc-6322 pod/simpletest.rc-h2562 node/ reason/Created
Sep 09 08:14:41.957 I ns/e2e-gc-6322 pod/simpletest.rc-tpng2 node/ reason/Created
Sep 09 08:14:41.978 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-66vkx
Sep 09 08:14:41.994 I ns/e2e-gc-6322 pod/simpletest.rc-wl8pk node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:14:41.995 I ns/e2e-gc-6322 pod/simpletest.rc-fqdm8 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:14:42.031 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-tpng2
Sep 09 08:14:42.040 I ns/e2e-gc-6322 pod/simpletest.rc-f8725 node/ reason/Created
Sep 09 08:14:42.055 I ns/e2e-gc-6322 pod/simpletest.rc-j95jg node/ reason/Created
Sep 09 08:14:42.071 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-h4bns
Sep 09 08:14:42.077 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ reason/Created
Sep 09 08:14:42.085 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-h2562
Sep 09 08:14:42.107 I ns/e2e-gc-6322 pod/simpletest.rc-66vkx node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:14:42.119 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:14:42.134 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-f8725
Sep 09 08:14:42.145 I ns/e2e-gc-6322 pod/simpletest.rc-h2562 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:14:42.146 I ns/e2e-gc-6322 pod/simpletest.rc-tpng2 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:14:42.153 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate Created pod: simpletest.rc-j95jg
Sep 09 08:14:42.172 I ns/e2e-gc-6322 pod/simpletest.rc-f8725 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:14:42.208 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:14:42.225 I ns/e2e-gc-6322 replicationcontroller/simpletest.rc reason/SuccessfulCreate (combined from similar events): Created pod: simpletest.rc-7gq87
Sep 09 08:14:42.248 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.196.3.122:8090/ready": dial tcp 10.196.3.122:8090: connect: connection refused
Sep 09 08:14:42.262 I ns/e2e-gc-6322 pod/simpletest.rc-j95jg node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:14:42.988 W ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-0a2b4421-4d77-4954-b61d-b44031375cf8_e2e-emptydir-5547_6428776f-0b9d-41d4-8894-7f4c4a8c00fa_0(62e56c51922b02781d231789a8085d34804d9c460f449890d5dc6e7123a4eecb): netplugin failed: "2020/09/09 08:12:42 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-emptydir-5547;K8S_POD_NAME=pod-0a2b4421-4d77-4954-b61d-b44031375cf8;K8S_POD_INFRA_CONTAINER_ID=62e56c51922b02781d231789a8085d34804d9c460f449890d5dc6e7123a4eecb, CNI_NETNS=/var/run/netns/81edb140-4927-49e8-aab9-de25e87c3b66).\n"
Sep 09 08:14:43.809 W ns/e2e-container-probe-7604 pod/busybox-9c1528aa-c7d1-4c0a-93bc-0c538464a676 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:14:43.820 W ns/e2e-kubelet-test-810 pod/bin-false7396d061-d4ab-484d-8bf1-65d13e3af4be node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:14:43.828 W ns/e2e-projected-4483 pod/pod-projected-configmaps-a7f4dbf0-09b8-496b-9ed5-6cda4ce85687 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:14:45.429 W ns/e2e-gc-6322 pod/simpletest.rc-h2562 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-h2562_e2e-gc-6322_25da64b4-1e6e-4c11-bd98-a0610b82a02d_0(7d1ede293062aec32687a620e88b3736c86bc064d4ba03844cd1eb048e1f3bb9): [e2e-gc-6322/simpletest.rc-h2562:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:33582->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:14:45.521 W ns/e2e-gc-6322 pod/simpletest.rc-wl8pk node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-wl8pk_e2e-gc-6322_140fb544-8dce-40ce-b66a-48eb976a495d_0(f67fde338ab347ece9e81656a339eb85447dfea785ac1a2a87ac95a611a14ab0): [e2e-gc-6322/simpletest.rc-wl8pk:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:33578->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:14:45.633 I ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f5e70df5263a2a421c4e819534a8ed890a331a1306c851cfe749af494180126
Sep 09 08:14:45.663 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-2_e2e-pod-network-test-3433_af29efb3-3196-498d-b1c9-b7a4f9647a33_0(6dae1615824cb4feb73bde1fc96ff5bd5d6e147429e57572e09cf34cf54a0693): [e2e-pod-network-test-3433/netserver-2:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:45.712 W ns/e2e-gc-6322 pod/simpletest.rc-f8725 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-f8725_e2e-gc-6322_55478654-a63a-4738-bc05-69660fb17510_0(90efb1608a189316307797efb183ed1e20d2d848c711b73ad1a4395eb1dfd3c2): [e2e-gc-6322/simpletest.rc-f8725:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:33598->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:14:45.866 I ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Created
Sep 09 08:14:45.948 I ns/e2e-webhook-6430 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:14:46.030 I ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Started
Sep 09 08:14:46.089 I ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ reason/Created
Sep 09 08:14:46.190 I ns/e2e-webhook-6430 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-9qvp8
Sep 09 08:14:46.240 I ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:14:46.422 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:14:46.828 W clusteroperator/network changed Progressing to False
Sep 09 08:14:46.915 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/NotReady
Sep 09 08:14:46.915 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Restarted
Sep 09 08:14:47.120 W ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-9qvp8_e2e-webhook-6430_e158c3d1-f002-46e3-af9b-df109b2374ae_0(979129e8fbd36bcc5534018507998da41ca05109678c8120d12b571f4e363ff0): [e2e-webhook-6430/sample-webhook-deployment-7bc8486f8c-9qvp8:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": dial tcp [::1]:5036: connect: connection refused
Sep 09 08:14:47.208 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:14:47.777 W ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-bpbwh_e2e-webhook-6502_1ba95d65-4138-4dd2-975b-be28653303c2_0(5b5abee71ede17c3760857408bd2ec68bd177bfc36e82e7b073db7db85aee598): netplugin failed: "2020/09/09 08:12:43 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-webhook-6502;K8S_POD_NAME=sample-webhook-deployment-7bc8486f8c-bpbwh;K8S_POD_INFRA_CONTAINER_ID=5b5abee71ede17c3760857408bd2ec68bd177bfc36e82e7b073db7db85aee598, CNI_NETNS=/var/run/netns/e8830d0e-7644-4606-9bca-ae3669d64969).\n2020-09-09T08:14:43Z [verbose] Del: e2e-webhook-6502:sample-webhook-deployment-7bc8486f8c-bpbwh:unknownUID:kuryr:eth0 {\"cniVersion\":\"0.3.1\",\"debug\":true,\"kuryr_conf\":\"/etc/kuryr/kuryr.conf\",\"name\":\"kuryr\",\"type\":\"kuryr-cni\"}\n2020/09/09 08:14:43 Calling kuryr-daemon with DEL request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-webhook-6502;K8S_POD_NAME=sample-webhook-deployment-7bc8486f8c-bpbwh;K8S_POD_INFRA_CONTAINER_ID=5b5abee71ede17c3760857408bd2ec68bd177bfc36e82e7b073db7db85aee598, CNI_NETNS=/var/run/netns/e8830d0e-7644-4606-9bca-ae3669d64969).\n"
Sep 09 08:14:47.862 W ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-h4bns_e2e-gc-6322_04e3c625-1b7c-4db7-8945-df20550f8428_0(e39fa86ee92ce4f43a41d322caadbe12fb1d522e35c626a1f58ca0dc7d43045f): [e2e-gc-6322/simpletest.rc-h4bns:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:59624->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:14:47.922 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780_e2e-projected-7710_4cc38614-8870-43ef-a37f-eaef2e9efad6_0(b5cc56359e350502e9cf7fba9c158acb48b268bff7b9b55d5e04e29cad62751e): [e2e-projected-7710/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 08:14:47.950 W ns/e2e-gc-6322 pod/simpletest.rc-tpng2 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-tpng2_e2e-gc-6322_2f8b8d16-88a0-44bc-87dd-4a19334b7144_0(27ad92c9d656e6567a7719e4b76cc19f952726b249271164d12740ea24939c02): [e2e-gc-6322/simpletest.rc-tpng2:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:59628->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:14:47.964 W ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-cr9bz_e2e-gc-6322_de4a0156-671f-40ba-b8d9-68c828a238d5_0(d6f1edc8edd5de96c7f8a60443156e373abd6e42b26d32cb72b22f862fe5f018): [e2e-gc-6322/simpletest.rc-cr9bz:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:47.993 W ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod1_e2e-services-6570_66215b32-4334-4df1-a996-e7c0dea52c03_0(a4969571e94ec835f80add9cedc7b045384ad6e4707e30f75f12b4de53b7db90): [e2e-services-6570/pod1:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:48.012 W ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-7gq87_e2e-gc-6322_0e8ac72c-ae6f-4ba0-9c0a-a7e0c114aab7_0(95152e880e1208f6edf404cf812097596c0545ceddb571dadcdfbc5f525ddae5): [e2e-gc-6322/simpletest.rc-7gq87:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:59638->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:14:48.033 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_netserver-0_e2e-pod-network-test-3433_dc6bf87d-66b6-4cc7-a08f-40318ab5a3fd_0(3d27693defd790042597dd794ed7dae8e9a4fd95a3950238588c00f2def507d4): [e2e-pod-network-test-3433/netserver-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:49.265 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 08:14:49.265 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 08:14:50.590 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (20 times)
Sep 09 08:14:50.617 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Killing
Sep 09 08:14:51.556 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_update-demo-nautilus-9cc79_e2e-kubectl-1621_abb32199-c411-46e3-8b4c-9aa11db8b9e5_0(dc93832184526049819b87183a5bdaf2fcfeae8a0e4ea2bc7f179e25839e7140): netplugin failed: "2020/09/09 08:12:44 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-kubectl-1621;K8S_POD_NAME=update-demo-nautilus-9cc79;K8S_POD_INFRA_CONTAINER_ID=dc93832184526049819b87183a5bdaf2fcfeae8a0e4ea2bc7f179e25839e7140, CNI_NETNS=/var/run/netns/e4dff49e-541d-4fdc-8431-cbe955613e74).\n"
Sep 09 08:14:51.612 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: Get "http://10.196.1.181:8090/ready": dial tcp 10.196.1.181:8090: connect: connection refused (2 times)
Sep 09 08:14:56.388 W ns/e2e-gc-6322 pod/simpletest.rc-66vkx node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-66vkx_e2e-gc-6322_1ac0a356-0ee0-421a-bdee-4f3777476da0_0(1facbd4b507e6189b0ebfeca3f5779111347977e8a4582d0110850e96cb0622f): [e2e-gc-6322/simpletest.rc-66vkx:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:56.404 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47_e2e-dns-2176_c9901b0d-c5b8-403a-b50b-106426cc3544_0(e552720dd62df57acc1b2f4606ecba4f71466e566acbaa55ce48bd6b1c9a438b): [e2e-dns-2176/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:56.423 W ns/e2e-gc-6322 pod/simpletest.rc-fqdm8 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-fqdm8_e2e-gc-6322_4e4dbfc7-9e7e-4ae0-87df-e3a87a6ef6e8_0(663a9dae485917c7a1d84827cfeafe94c7f245b067a887f3a027a9d796633645): [e2e-gc-6322/simpletest.rc-fqdm8:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:56.447 W ns/e2e-gc-6322 pod/simpletest.rc-j95jg node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_simpletest.rc-j95jg_e2e-gc-6322_3374244b-1a56-482d-af28-5871931493d4_0(dc3d610d891b9f49693c8e0fd0bf820101f61ec3a4defeb4f6c4d18f7ae4ec58): [e2e-gc-6322/simpletest.rc-j95jg:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:56.461 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4_e2e-container-runtime-2767_6f3df5e1-9474-464f-97a4-d830c0249dd3_0(c744467c8ef0d3cd5f4c6eff43ff4d15c00190faac9c2340ae61def82fb620f2): [e2e-container-runtime-2767/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:14:57.173 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/NotReady
Sep 09 08:14:57.173 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Restarted
Sep 09 08:14:57.647 I ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 reason/AddedInterface Add eth0 [10.128.159.190/23]
Sep 09 08:14:58.267 I ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:14:58.533 I ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:14:58.586 I ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 04:14:59.166 I test="[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:14:59.569 I ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 reason/AddedInterface Add eth0 [10.128.194.166/23]
Sep 09 08:15:00.353 I ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:01.026 I ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Created
Sep 09 08:15:01.084 I ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh reason/AddedInterface Add eth0 [10.128.141.216/23]
Sep 09 08:15:01.134 W ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:01.204 I ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Started
Sep 09 08:15:01.444 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (4 times)
Sep 09 08:15:01.773 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:01.980 I ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 node/ reason/Created
Sep 09 08:15:02.016 I ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:15:02.094 I ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:02.429 I ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Ready
Sep 09 08:15:02.573 I ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 08:15:02.630 I ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 08:15:04.010 I ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 08:15:04.864 W ns/e2e-emptydir-5547 pod/pod-0a2b4421-4d77-4954-b61d-b44031375cf8 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:05.345 W ns/e2e-projected-7710 pod/pod-projected-secrets-c51e23a5-6ded-4b59-b5e3-7ec1af91e780 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:05.822 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:15:05.832 W ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:15:05.877 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 04:15:05.906 - 206s  I test="[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:15:05.946 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 reason/AddedInterface Add eth0 [10.128.179.177/23]
Sep 09 08:15:06.804 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo reason/Pulling image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:15:06.936 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 08:15:06.963 W ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:07.042 E ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver container exited with code 2 (Error): 
Sep 09 08:15:07.347 W ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:07.347 W ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:15:07.428 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:07.465 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ reason/Created
Sep 09 08:15:07.512 I ns/e2e-services-7427 replicationcontroller/affinity-clusterip reason/SuccessfulCreate Created pod: affinity-clusterip-p85mp
Sep 09 08:15:07.512 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:15:07.554 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ reason/Created
Sep 09 08:15:07.608 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ reason/Created
Sep 09 08:15:07.623 I ns/e2e-services-7427 replicationcontroller/affinity-clusterip reason/SuccessfulCreate Created pod: affinity-clusterip-w4qwk
Sep 09 08:15:07.685 I ns/e2e-services-7427 replicationcontroller/affinity-clusterip reason/SuccessfulCreate Created pod: affinity-clusterip-zf6sn
Sep 09 08:15:07.736 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:15:07.736 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:15:08.432 W ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:08.432 W ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/NotReady
Sep 09 08:15:08.484 I ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b node/ reason/Created
Sep 09 08:15:08.562 I ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:15:09.012 W ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "default-token-6lslb" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:15:11.356 I ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 reason/AddedInterface Add eth0 [10.128.152.92/23]
Sep 09 08:15:11.725 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Ready
Sep 09 08:15:11.926 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo reason/Pulled image/gcr.io/kubernetes-e2e-test-images/nautilus:1.0
Sep 09 08:15:12.031 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 reason/AddedInterface Add eth0 [10.128.126.122/23]
Sep 09 08:15:12.152 W clusteroperator/network changed Progressing to False
Sep 09 08:15:12.228 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo reason/Created
Sep 09 08:15:12.264 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo reason/Started
Sep 09 08:15:12.366 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo reason/Ready
Sep 09 08:15:12.476 W ns/e2e-webhook-6502 pod/sample-webhook-deployment-7bc8486f8c-bpbwh node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:12.748 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz reason/AddedInterface Add eth0 [10.128.190.212/23]
Sep 09 08:15:12.801 I ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:12.934 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:13.022 W ns/e2e-pod-network-test-3433 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:15:13.027 I ns/e2e-gc-6322 pod/simpletest.rc-wl8pk reason/AddedInterface Add eth0 [10.128.191.81/23]
Sep 09 08:15:13.119 I ns/e2e-gc-6322 pod/simpletest.rc-fqdm8 reason/AddedInterface Add eth0 [10.128.191.219/23]
Sep 09 08:15:13.196 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:15:13.294 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:15:13.330 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:13.547 I ns/e2e-gc-6322 pod/simpletest.rc-f8725 reason/AddedInterface Add eth0 [10.128.190.149/23]
Sep 09 08:15:13.612 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:13.617 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 reason/AddedInterface Add eth0 [10.128.190.222/23]
Sep 09 08:15:13.661 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Created
Sep 09 08:15:13.669 I ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr container/sample-webhook reason/Created
Sep 09 08:15:13.692 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Started
Sep 09 08:15:13.762 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 08:15:13.769 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns reason/AddedInterface Add eth0 [10.128.190.201/23]
Sep 09 08:15:13.799 I ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr container/sample-webhook reason/Started
Sep 09 08:15:13.929 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 08:15:14.041 I ns/e2e-gc-6322 pod/simpletest.rc-j95jg reason/AddedInterface Add eth0 [10.128.190.130/23]
Sep 09 08:15:14.041 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 08:15:14.041 I ns/e2e-gc-6322 pod/simpletest.rc-fqdm8 node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/Pulling image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:14.105 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Created
Sep 09 08:15:14.141 I ns/e2e-gc-6322 pod/simpletest.rc-wl8pk node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Pulling image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:14.159 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:15:14.175 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:15:14.178 I ns/e2e-gc-6322 pod/simpletest.rc-66vkx reason/AddedInterface Add eth0 [10.128.191.188/23]
Sep 09 08:15:14.205 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/Killing
Sep 09 08:15:14.212 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Started
Sep 09 08:15:14.376 I ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo reason/Killing
Sep 09 08:15:14.455 I ns/e2e-gc-6322 pod/simpletest.rc-f8725 node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Pulling image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:14.466 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/querier reason/Ready
Sep 09 08:15:14.466 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier reason/Ready
Sep 09 08:15:14.466 I ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:15:14.467 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 08:15:14.486 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:14.683 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:14.816 I ns/e2e-gc-6322 pod/simpletest.rc-j95jg node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/Pulling image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:14.828 I ns/e2e-gc-6322 pod/simpletest.rc-h2562 reason/AddedInterface Add eth0 [10.128.191.31/23]
Sep 09 08:15:14.932 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 08:15:14.984 I ns/e2e-gc-6322 pod/simpletest.rc-66vkx node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/Pulling image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:15.023 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 08:15:15.108 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 08:15:15.143 I ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr container/sample-webhook reason/Ready
Sep 09 08:15:15.175 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 08:15:15.531 E ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 container/update-demo container exited with code 2 (Error): 
Sep 09 08:15:15.680 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:15.680 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 container/update-demo reason/NotReady
Sep 09 08:15:15.718 I ns/e2e-gc-6322 pod/simpletest.rc-h2562 node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Pulling image/docker.io/library/nginx:1.14-alpine
Sep 09 08:15:15.824 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 08:15:16.055 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 08:15:16.344 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (112 times)
Sep 09 08:15:16.450 I ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ reason/Created
Sep 09 08:15:16.494 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:15:16.668 I ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:15:16.961 W ns/e2e-pod-network-test-3433 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:17.328 W ns/e2e-pod-network-test-3433 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:15:17.906 I ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb node/ reason/Created
Sep 09 08:15:18.009 I ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:15:18.972 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:15:19.608 W ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:15:20.508 E ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/jessie-querier container exited with code 137 (Error): 
Sep 09 08:15:20.508 E ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/querier container exited with code 137 (Error): 
Sep 09 08:15:20.508 E ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 08:15:21.316 W ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:21.316 W ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr container/sample-webhook reason/NotReady
Sep 09 08:15:21.698 W ns/e2e-dns-2176 pod/dns-test-a74e3566-c25a-41c0-b3e2-59a1cb31de47 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:15:21.866 W ns/e2e-gc-6322 pod/simpletest.rc-66vkx node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:15:21.899 W ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:21.946 W ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:21.977 W ns/e2e-gc-6322 pod/simpletest.rc-f8725 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:15:22.003 W ns/e2e-gc-6322 pod/simpletest.rc-fqdm8 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:15:22.025 I ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Killing
Sep 09 08:15:22.047 W ns/e2e-gc-6322 pod/simpletest.rc-h2562 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:15:22.087 W ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:22.108 W ns/e2e-gc-6322 pod/simpletest.rc-j95jg node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:15:22.128 W ns/e2e-gc-6322 pod/simpletest.rc-tpng2 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:22.208 W ns/e2e-gc-6322 pod/simpletest.rc-wl8pk node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:15:22.280 I ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Killing
Sep 09 08:15:22.363 I ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Killing
Sep 09 08:15:23.618 W ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:23.618 W ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/NotReady
Sep 09 08:15:23.765 W ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:23.765 W ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/NotReady
Sep 09 08:15:23.794 I ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 node/ reason/Created
Sep 09 08:15:23.833 W ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:15:23.833 W ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/NotReady
Sep 09 08:15:23.932 I ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:15:26.224 W ns/e2e-webhook-6430 pod/sample-webhook-deployment-7bc8486f8c-9qvp8 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:15:27.426 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-9cc79 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:15:27.464 W ns/e2e-kubectl-1621 pod/update-demo-nautilus-47zv5 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:35.800 W ns/e2e-gc-6322 pod/simpletest.rc-wl8pk node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:15:35.812 W ns/e2e-gc-6322 pod/simpletest.rc-fqdm8 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:15:35.812 W ns/e2e-gc-6322 pod/simpletest.rc-7gq87 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:35.830 W ns/e2e-gc-6322 pod/simpletest.rc-f8725 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:15:35.832 W ns/e2e-gc-6322 pod/simpletest.rc-h4bns node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:35.999 W ns/e2e-gc-6322 pod/simpletest.rc-j95jg node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:15:36.319 W ns/e2e-gc-6322 pod/simpletest.rc-h2562 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:15:36.328 W ns/e2e-gc-6322 pod/simpletest.rc-66vkx node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:15:36.336 W ns/e2e-gc-6322 pod/simpletest.rc-cr9bz node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:40.196 I ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 reason/AddedInterface Add eth0 [10.128.157.81/23]
Sep 09 08:15:41.090 I ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:41.283 I ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Created
Sep 09 08:15:41.348 I ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 node/ostest-5xqm8-worker-0-cbbx9 container/projected-secret-volume-test reason/Started
Sep 09 08:15:43.210 W ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:44.763 W ns/e2e-projected-9920 pod/pod-projected-secrets-f80249e9-055c-4d27-a84c-f570a88c5af5 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:15:47.693 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk reason/AddedInterface Add eth0 [10.128.167.167/23]
Sep 09 08:15:48.416 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:48.522 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn reason/AddedInterface Add eth0 [10.128.166.169/23]
Sep 09 08:15:48.752 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip reason/Created
Sep 09 08:15:48.833 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip reason/Started
Sep 09 08:15:48.968 - 14s   W ns/e2e-gc-6322 pod/simpletest.rc-tpng2 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:15:49.316 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:49.651 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip reason/Created
Sep 09 08:15:49.852 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip reason/Ready
Sep 09 08:15:49.879 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip reason/Started
Sep 09 08:15:49.989 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip reason/Ready
Sep 09 08:15:50.331 W ns/e2e-services-7427 endpoints/affinity-clusterip reason/FailedToUpdateEndpoint Failed to update endpoint e2e-services-7427/affinity-clusterip: Operation cannot be fulfilled on endpoints "affinity-clusterip": the object has been modified; please apply your changes to the latest version and try again
Sep 09 08:15:52.654 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp reason/AddedInterface Add eth0 [10.128.166.65/23]
Sep 09 08:15:52.977 I ns/e2e-services-6570 pod/pod1 reason/AddedInterface Add eth0 [10.128.165.63/23]
Sep 09 08:15:53.565 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:53.846 I ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:54.011 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip reason/Created
Sep 09 08:15:54.116 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip reason/Started
Sep 09 08:15:54.152 I ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Created
Sep 09 08:15:54.221 I ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Started
Sep 09 08:15:54.768 I ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Ready
Sep 09 08:15:54.848 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip reason/Ready
Sep 09 08:15:55.206 I ns/e2e-services-6570 pod/pod2 node/ reason/Created
Sep 09 08:15:55.368 I ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:15:55.606 I ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ reason/Created
Sep 09 08:15:55.731 I ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:15:56.641 I ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e reason/AddedInterface Add eth0 [10.128.206.17/23]
Sep 09 08:15:57.357 I ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 container/secret-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:15:57.669 I ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 container/secret-test reason/Created
Sep 09 08:15:57.762 I ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 container/secret-test reason/Started
Sep 09 08:15:57.862 I ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 container/secret-test reason/Ready
Sep 09 08:15:58.742 W ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:15:59.964 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ reason/Created
Sep 09 08:16:00.055 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:16:00.788 E ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 container/secret-test container exited with code 2 (Error): 
Sep 09 08:16:05.055 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 reason/AddedInterface Add eth0 [10.128.133.98/23]
Sep 09 08:16:05.343 I ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb reason/AddedInterface Add eth0 [10.128.205.235/23]
Sep 09 08:16:05.635 W ns/e2e-emptydir-wrapper-6204 pod/pod-secrets-0b0d8a6d-5c7a-4ab5-90cc-62ec60b30d2e node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:16:05.734 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:06.003 I ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb node/ostest-5xqm8-worker-0-cbbx9 container/termination-message-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:06.059 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Created
Sep 09 08:16:06.063 I ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b reason/AddedInterface Add eth0 [10.128.186.210/23]
Sep 09 08:16:06.139 I ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 reason/AddedInterface Add eth0 [10.128.148.151/23]
Sep 09 08:16:06.181 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Started
Sep 09 08:16:06.352 I ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb node/ostest-5xqm8-worker-0-cbbx9 container/termination-message-container reason/Created
Sep 09 08:16:06.474 I ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb node/ostest-5xqm8-worker-0-cbbx9 container/termination-message-container reason/Started
Sep 09 08:16:06.656 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:06.723 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa container exited with code 1 (Error): 
Sep 09 08:16:06.836 I ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:06.958 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Created
Sep 09 08:16:07.003 W ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:16:07.033 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Started
Sep 09 08:16:07.098 I ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Created
Sep 09 08:16:07.174 I ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Started
Sep 09 08:16:07.325 I ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b node/ostest-5xqm8-worker-0-twrlr container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:16:07.671 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 reason/BackOff Back-off restarting failed container
Sep 09 08:16:07.707 I ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b node/ostest-5xqm8-worker-0-twrlr container/secret-volume-test reason/Created
Sep 09 08:16:07.849 I ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b node/ostest-5xqm8-worker-0-twrlr container/secret-volume-test reason/Started
Sep 09 08:16:07.852 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Restarted
Sep 09 08:16:08.377 W ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:16:08.691 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 reason/BackOff Back-off restarting failed container (2 times)
Sep 09 08:16:08.782 I ns/e2e-services-7427 pod/execpod-affinityd4ssh reason/AddedInterface Add eth0 [10.128.167.104/23]
Sep 09 08:16:08.970 I ns/e2e-webhook-6747 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:16:09.066 I ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ reason/Created
Sep 09 08:16:09.100 I ns/e2e-webhook-6747 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-54m6z
Sep 09 08:16:09.145 I ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:16:09.359 W ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:16:09.441 I ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:16:09.731 I ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Created
Sep 09 08:16:09.791 I ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Started
Sep 09 08:16:09.994 I ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Ready
Sep 09 08:16:10.488 W ns/e2e-downward-api-3376 pod/downward-api-e981fc9b-fe94-4a00-b5d2-ac0c860a4323 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:16:10.916 W ns/e2e-container-runtime-2861 pod/termination-message-container96873204-a9fa-42bb-999e-d8e4415101fb node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:16:11.522 W ns/e2e-projected-2462 pod/pod-projected-secrets-c72d18c1-c6f7-4e31-85b2-431584e4251b node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:16:12.106 W ns/e2e-gc-6322 pod/simpletest.rc-tpng2 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:16:13.639 I ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ reason/Created
Sep 09 08:16:13.701 I ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:16:14.645 I ns/e2e-kubelet-test-4370 pod/bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04 node/ reason/Created
Sep 09 08:16:14.668 W ns/e2e-kubelet-test-4370 pod/bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04 node/ reason/GracefulDelete in 0s
Sep 09 08:16:14.684 W ns/e2e-kubelet-test-4370 pod/bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04 reason/FailedScheduling skip schedule deleting pod: e2e-kubelet-test-4370/bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04
Sep 09 08:16:14.684 W ns/e2e-kubelet-test-4370 pod/bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04 node/ reason/Deleted
Sep 09 08:16:14.725 W ns/e2e-kubelet-test-4370 pod/bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04 reason/FailedScheduling Binding rejected: plugin "DefaultBinder" failed to bind pod "e2e-kubelet-test-4370/bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04": pods "bin-false7bb203cf-10c4-4ff4-8965-ef8e8f8fdf04" not found
Sep 09 08:16:16.108 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ reason/Created
Sep 09 08:16:16.266 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:16:20.325 I ns/e2e-services-6570 pod/pod2 reason/AddedInterface Add eth0 [10.128.164.74/23]
Sep 09 08:16:21.004 I ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:16:21.273 I ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Created
Sep 09 08:16:21.338 I ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Started
Sep 09 08:16:21.752 I ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Ready
Sep 09 08:16:22.290 W ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:16:22.322 I ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/Killing
Sep 09 08:16:23.199 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:23.368 W ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:16:23.637 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Created
Sep 09 08:16:23.637 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Started
Sep 09 08:16:23.754 I ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Killing
Sep 09 08:16:23.811 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Ready
Sep 09 08:16:23.811 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Restarted
Sep 09 08:16:23.960 W ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:16:23.960 W ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 container/pause reason/NotReady
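The "invariant violation (bug)" warnings here and later in this log are emitted by the test monitor when a watched pod reports phase Running and then Pending; a terminating pod is expected to move only forward, to Succeeded or Failed. A minimal sketch of the phase check those warnings correspond to, assuming a simple comparison of consecutive pod updates rather than the monitor's actual implementation:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// violatesPhaseInvariant reports whether a pod update moved backwards from
// Running to Pending, which the event stream above flags as a bug. This is a
// simplified stand-in for the monitor's check, not its real code.
func violatesPhaseInvariant(oldPod, newPod *corev1.Pod) bool {
	return oldPod.Status.Phase == corev1.PodRunning && newPod.Status.Phase == corev1.PodPending
}

func main() {
	oldPod := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodRunning}}
	newPod := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}
	if violatesPhaseInvariant(oldPod, newPod) {
		fmt.Println("invariant violation (bug): pod should not transition Running->Pending even when terminated")
	}
}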
Sep 09 08:16:24.048 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:16:24.236 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff reason/AddedInterface Add eth0 [10.128.123.168/23]
Sep 09 08:16:24.730 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Killing
Sep 09 08:16:24.813 E ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pause container exited with code 2 (Error): 
Sep 09 08:16:25.061 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path
Sep 09 08:16:25.070 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:25.938 W ns/e2e-services-6570 pod/pod1 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:16:25.946 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (2 times)
Sep 09 08:16:25.990 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:26.331 I ns/e2e-statefulset-5052 pod/ss2-0 node/ reason/Created
Sep 09 08:16:26.375 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-0 in StatefulSet ss2 successful
Sep 09 08:16:26.375 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:16:26.965 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:26.993 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (3 times)
Sep 09 08:16:27.146 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:16:27.210 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpa778f9d50-710b-4b6e-b738-c21dd210def4 node/ostest-5xqm8-worker-0-rzx47 container/terminate-cmd-rpa reason/Killing
Sep 09 08:16:27.464 W ns/e2e-services-6570 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:16:28.206 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ reason/Created
Sep 09 08:16:28.351 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:16:39.754 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (4 times)
Sep 09 08:16:39.777 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:16:43.250 I ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z reason/AddedInterface Add eth0 [10.128.143.169/23]
Sep 09 08:16:44.031 I ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:16:44.331 I ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Created
Sep 09 08:16:44.399 I ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Started
Sep 09 08:16:45.428 I ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/Ready
Sep 09 08:16:48.270 I ns/e2e-webhook-6747 pod/to-be-attached-pod node/ reason/Created
Sep 09 08:16:48.309 I ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:16:48.593 I ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ reason/Created
Sep 09 08:16:48.819 I ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:16:49.093 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:16:49.093 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:16:49.591 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:16:50.766 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (5 times)
Sep 09 08:16:50.784 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 04:16:51.846 - 304s  I test="[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:16:54.307 I ns/e2e-webhook-1226 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:16:54.362 I ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ reason/Created
Sep 09 08:16:54.646 I ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:16:54.662 I ns/e2e-webhook-1226 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-mrn7z
Sep 09 08:16:55.801 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "default-token-j9rz4" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:17:02.816 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (6 times)
Sep 09 08:17:02.899 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:17:03.968 - 59s   W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:17:03.991 W ns/openshift-kuryr pod/kuryr-cni-7sd9x node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.196:8090/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (3 times)
Sep 09 08:17:13.751 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:17:13.765 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (7 times)
Sep 09 08:17:17.477 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a reason/AddedInterface Add eth0 [10.128.189.27/23]
Sep 09 08:17:17.525 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 reason/AddedInterface Add eth0 [10.128.133.163/23]
Sep 09 08:17:17.950 I ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 reason/AddedInterface Add eth0 [10.128.146.95/23]
Sep 09 08:17:18.022 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/dels-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:17:18.201 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:17:18.301 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/dels-volume-test reason/Created
Sep 09 08:17:18.391 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/dels-volume-test reason/Started
Sep 09 08:17:18.403 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/upds-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:17:18.536 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof reason/Created
Sep 09 08:17:18.676 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof reason/Started
Sep 09 08:17:18.715 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/upds-volume-test reason/Created
Sep 09 08:17:18.769 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/upds-volume-test reason/Started
Sep 09 08:17:18.807 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/creates-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:17:18.877 I ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:17:18.968 W ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:17:18.968 W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:17:19.033 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/creates-volume-test reason/Created
Sep 09 08:17:19.070 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/creates-volume-test reason/Started
Sep 09 08:17:19.171 I ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 08:17:19.207 I ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 08:17:19.252 I ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Ready
Sep 09 08:17:19.260 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:17:19.382 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof container exited with code 1 (Error): 
Sep 09 08:17:19.519 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof reason/Created
Sep 09 08:17:19.633 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof reason/Started
Sep 09 08:17:20.108 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/upds-volume-test reason/Ready
Sep 09 08:17:20.108 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/dels-volume-test reason/Ready
Sep 09 08:17:20.108 I ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/creates-volume-test reason/Ready
Sep 09 08:17:20.496 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpof reason/Restarted
Sep 09 08:17:21.323 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:17:22.375 W ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:17:26.724 I ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ reason/Created
Sep 09 08:17:26.820 I ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:17:26.944 I ns/e2e-kubectl-4019 replicationcontroller/agnhost-primary reason/SuccessfulCreate Created pod: agnhost-primary-qmhtv
Sep 09 08:17:27.763 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (8 times)
Sep 09 08:17:27.791 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:17:33.968 - 240s  W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:17:35.179 W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:17:37.053 W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:17:37.053 W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/dels-volume-test reason/NotReady
Sep 09 08:17:37.053 W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/upds-volume-test reason/NotReady
Sep 09 08:17:37.053 W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 container/creates-volume-test reason/NotReady
Sep 09 08:17:41.764 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:17:41.808 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (9 times)
Sep 09 08:17:48.968 - 14s   W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:17:48.968 - 44s   W ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:17:48.968 - 224s  W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:17:56.761 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: error SubPath `/tmp` must not be an absolute path (10 times)
Sep 09 08:17:56.778 I ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:18:00.043 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:18:03.968 - 224s  W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:18:05.458 W ns/e2e-var-expansion-3598 pod/var-expansion-0c8d57e6-fd3a-4824-a9dd-c3e4718523ff node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
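The recurring "SubPath `/tmp` must not be an absolute path" failures above are kubelet refusing to start a container whose volumeMount subPathExpr expands to an absolute path, which is what this var-expansion conformance case provokes on purpose. A hedged sketch of the kind of pod spec that reproduces the error; names and values below are illustrative and not copied from the test source:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathExprPod builds a pod whose volumeMount subPathExpr expands, via the
// POD_PATH env var, to the absolute path "/tmp". Kubelet then rejects the
// container with: error SubPath `/tmp` must not be an absolute path.
func subPathExprPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-subpath"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 600"},
				Env: []corev1.EnvVar{{
					Name:  "POD_PATH",
					Value: "/tmp", // absolute value makes the expanded subPath invalid
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/data",
					SubPathExpr: "$(POD_PATH)", // expands to /tmp -> rejected by kubelet
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}

func main() { _ = subPathExprPod() }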
Sep 09 04:18:06.136 - 612s  I test="[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:18:07.388 I ns/e2e-statefulset-4657 pod/ss2-0 node/ reason/Created
Sep 09 08:18:07.428 I ns/e2e-statefulset-4657 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-0 in StatefulSet ss2 successful
Sep 09 08:18:07.457 I ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:18:12.150 W ns/e2e-secrets-6761 pod/pod-secrets-8537a171-a1dd-40e0-9177-bc03de3e416a node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:18:18.273 W ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:18:18.313 I ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Killing
Sep 09 08:18:18.520 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:18:18.756 W ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 08:18:18.763 W ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 1s
Sep 09 08:18:18.785 W ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:18:18.791 I ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip reason/Killing
Sep 09 08:18:18.823 I ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip reason/Killing
Sep 09 08:18:18.825 I ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip reason/Killing
Sep 09 08:18:19.140 W clusteroperator/network changed Progressing to False
Sep 09 08:18:19.568 E ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause container exited with code 2 (Error): 
Sep 09 08:18:19.794 W ns/e2e-services-7427 pod/execpod-affinityd4ssh node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:18:22.334 W ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:18:22.334 W ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip reason/NotReady
Sep 09 08:18:22.579 W ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:18:22.579 W ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip reason/NotReady
Sep 09 08:18:22.720 W ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:18:22.720 W ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip reason/NotReady
Sep 09 08:18:23.302 W ns/e2e-services-7427 pod/affinity-clusterip-zf6sn node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:18:23.600 W ns/e2e-services-7427 pod/affinity-clusterip-p85mp node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:18:32.398 W ns/e2e-services-7427 pod/affinity-clusterip-w4qwk node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 04:18:32.668 I test="[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:18:33.863 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ reason/Created
Sep 09 08:18:33.937 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-7p7fz
Sep 09 08:18:33.948 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-wbvjs node/ reason/Created
Sep 09 08:18:33.968 W ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:18:33.992 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:18:34.014 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ reason/Created
Sep 09 08:18:34.021 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-wbvjs
Sep 09 08:18:34.034 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-wbvjs node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:18:34.090 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-fz2qh
Sep 09 08:18:34.123 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:18:34.123 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zwk2r node/ reason/Created
Sep 09 08:18:34.137 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ reason/Created
Sep 09 08:18:34.138 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ reason/Created
Sep 09 08:18:34.153 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-zwk2r
Sep 09 08:18:34.168 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-rgcdw node/ reason/Created
Sep 09 08:18:34.222 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-5fc4j
Sep 09 08:18:34.268 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-d7p67
Sep 09 08:18:34.290 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zcp28 node/ reason/Created
Sep 09 08:18:34.306 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ reason/Created
Sep 09 08:18:34.306 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zwk2r node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:18:34.312 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-gjc95 node/ reason/Created
Sep 09 08:18:34.321 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-rgcdw
Sep 09 08:18:34.377 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-zcp28
Sep 09 08:18:34.403 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate Created pod: simpletest-rc-to-be-deleted-cptm4
Sep 09 08:18:34.442 I ns/e2e-gc-7500 replicationcontroller/simpletest-rc-to-be-deleted reason/SuccessfulCreate (combined from similar events): Created pod: simpletest-rc-to-be-deleted-gjc95
Sep 09 08:18:34.527 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:18:34.559 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-rgcdw node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:18:34.559 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-gjc95 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:18:34.598 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:18:34.633 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zcp28 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:18:34.680 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:18:35.275 I ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 container/container1 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:18:35.669 I ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 container/container1 reason/Created
Sep 09 08:18:35.766 I ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 container/container1 reason/Started
Sep 09 08:18:36.310 W ns/e2e-downward-api-6277 pod/downwardapi-volume-590bc9a1-91b4-4ff1-b7e7-429072c5a2d3 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:18:36.334 I ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 container/container1 reason/Ready
Sep 09 08:18:36.835 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpofc9244bfd-6b10-4cbd-aca8-407baf547bd5 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:18:37.953 I ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d node/ reason/Created
Sep 09 08:18:37.995 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ reason/Created
Sep 09 08:18:38.026 I ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:18:38.091 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:18:39.028 I ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv reason/AddedInterface Add eth0 [10.128.201.116/23]
Sep 09 08:18:39.237 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-gjc95 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:18:39.256 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-wbvjs node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:18:39.264 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zcp28 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:18:39.267 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-rgcdw node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:18:39.278 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zwk2r node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:18:39.848 W ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:18:39.968 I ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:18:40.207 I ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Created
Sep 09 08:18:40.334 I ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Started
Sep 09 08:18:40.669 I ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/Ready
Sep 09 08:18:41.367 W ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:18:41.367 W ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 container/sample-webhook reason/NotReady
Sep 09 08:18:41.517 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ reason/Created
Sep 09 08:18:41.602 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:18:41.681 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(16d02ebf056a33760686eceed4f69f28e6d33644015053b1bce2aa76a8403339): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:18:44.320 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 reason/AddedInterface Add eth0 [10.128.133.163/23]
Sep 09 08:18:45.041 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpn reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:18:45.324 W ns/e2e-webhook-6747 pod/sample-webhook-deployment-7bc8486f8c-54m6z node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:18:45.350 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpn reason/Created
Sep 09 08:18:45.420 I ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpn reason/Started
Sep 09 08:18:45.647 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpn init container exited with code 1 (Error): 
Sep 09 08:18:45.647 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (): 
Sep 09 08:18:45.647 E ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 container/terminate-cmd-rpn container exited with code 1 (Error): 
Sep 09 08:18:46.333 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:18:47.280 I ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ reason/Created
Sep 09 08:18:47.370 I ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:18:49.866 W ns/e2e-container-runtime-2767 pod/terminate-cmd-rpn2ee803ac-3de8-46fe-9277-997e4ca3a724 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:18:52.391 I ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ reason/Created
Sep 09 08:18:52.501 I ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:18:54.041 W ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:18:55.431 W ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:18:55.431 W ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 container/container1 reason/NotReady
Sep 09 08:19:00.967 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j reason/AddedInterface Add eth0 [10.128.122.79/23]
Sep 09 08:19:01.056 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz reason/AddedInterface Add eth0 [10.128.123.49/23]
Sep 09 08:19:01.303 W ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:19:01.790 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:19:01.864 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:19:02.088 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 08:19:02.196 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 08:19:02.316 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Created
Sep 09 08:19:02.393 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Started
Sep 09 08:19:02.746 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 08:19:02.814 W ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:19:02.814 W ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-primary reason/NotReady
Sep 09 08:19:02.931 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Ready
Sep 09 08:19:03.968 W ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:19:03.968 W ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:19:04.480 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zcp28 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:19:04.920 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh reason/AddedInterface Add eth0 [10.128.122.255/23]
Sep 09 08:19:05.556 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 reason/AddedInterface Add eth0 [10.128.123.111/23]
Sep 09 08:19:05.828 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:19:06.279 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:19:06.284 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Created
Sep 09 08:19:06.390 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Started
Sep 09 08:19:06.555 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 08:19:06.619 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 08:19:06.789 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 08:19:06.882 W ns/e2e-kubectl-4019 pod/agnhost-primary-qmhtv node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:19:06.995 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Ready
Sep 09 08:19:07.304 W ns/e2e-webhook-6747 pod/to-be-attached-pod node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:19:07.745 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(2716f8b0dfa581797798fafa468c8771504ea7ab4ba5155af3c4c2db3e619f5c): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:19:08.033 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 reason/AddedInterface Add eth0 [10.128.122.130/23]
Sep 09 08:19:10.324 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:19:10.633 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Created
Sep 09 08:19:10.901 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Started
Sep 09 08:19:11.007 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Ready
Sep 09 08:19:12.902 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-gjc95 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:19:13.655 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 reason/AddedInterface Add eth0 [10.128.203.89/23]
Sep 09 08:19:14.297 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:19:14.621 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Created
Sep 09 08:19:14.625 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Started
Sep 09 08:19:15.557 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Ready
Sep 09 08:19:18.183 I ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d reason/AddedInterface Add eth0 [10.128.189.162/23]
Sep 09 08:19:18.940 I ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d node/ostest-5xqm8-worker-0-twrlr container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:19:18.968 - 525s  W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:19:19.244 I ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d node/ostest-5xqm8-worker-0-twrlr container/dapi-container reason/Created
Sep 09 08:19:19.404 I ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d node/ostest-5xqm8-worker-0-twrlr container/dapi-container reason/Started
Sep 09 08:19:20.443 W ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:19:20.590 I ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request reason/AddedInterface Add eth0 [10.128.212.123/23]
Sep 09 08:19:21.397 I ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:19:21.670 I ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Created
Sep 09 08:19:21.710 I ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Started
Sep 09 08:19:21.848 I ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request reason/Ready
Sep 09 08:19:23.392 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ reason/Created
Sep 09 08:19:23.439 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:19:23.479 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-wbvjs node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:19:23.552 I ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 reason/AddedInterface Add eth0 [10.128.215.22/23]
Sep 09 08:19:24.065 W ns/e2e-downward-api-5990 pod/downward-api-3ac64291-506a-4dc6-9d17-c56d308b143d node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:19:24.259 I ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 container/test reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:19:24.428 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-zwk2r node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:19:24.568 I ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 container/test reason/Created
Sep 09 08:19:24.642 I ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 container/test reason/Started
Sep 09 08:19:24.942 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-rgcdw node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:19:25.574 I ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 container/test reason/Ready
Sep 09 08:19:26.249 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500
Sep 09 08:19:26.280 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Killing
Sep 09 08:19:26.489 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:19:26.832 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Created
Sep 09 08:19:26.893 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Started
Sep 09 08:19:27.095 I ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 node/ reason/Created
Sep 09 08:19:27.137 I ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:19:27.547 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Restarted
Sep 09 08:19:28.948 I ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 node/ reason/Created
Sep 09 08:19:29.000 I ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:19:29.024 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:19:29.040 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Killing
Sep 09 08:19:29.047 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:19:29.071 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Killing
Sep 09 08:19:29.072 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:19:29.085 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:19:29.100 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Killing
Sep 09 08:19:29.125 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Killing
Sep 09 08:19:29.126 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:19:29.140 I ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr container/nginx reason/Killing
Sep 09 08:19:30.912 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(e7c0aab95eb4a5eaca32be13f541b9d62eac6ae918a1eb8f6656cae87c3efd69): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:19:31.203 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:19:31.203 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/NotReady
Sep 09 08:19:31.341 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:19:31.341 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/NotReady
Sep 09 08:19:31.731 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:19:31.731 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr container/nginx reason/NotReady
Sep 09 08:19:32.206 I ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ reason/Created
Sep 09 08:19:32.238 I ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:19:33.968 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr pod has been pending longer than a minute
Sep 09 08:19:33.968 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:19:39.557 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook reason/AddedInterface Add eth0 [10.128.212.129/23]
Sep 09 08:19:40.179 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-http-hook reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:19:40.413 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-http-hook reason/Created
Sep 09 08:19:40.474 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-http-hook reason/Started
Sep 09 08:19:41.036 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-http-hook reason/Ready
Sep 09 08:19:41.519 W ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 15s
Sep 09 08:19:42.195 W ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:19:42.988 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-d7p67 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:19:43.004 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-fz2qh node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:19:43.008 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-7p7fz node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:19:43.029 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-5fc4j node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:19:43.050 I ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-http-hook reason/Killing
Sep 09 08:19:45.463 W ns/e2e-gc-7500 pod/simpletest-rc-to-be-deleted-cptm4 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:19:45.748 W ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:19:45.748 W ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 container/test reason/NotReady
Sep 09 08:19:45.969 W ns/e2e-svcaccounts-9212 pod/pod-service-account-d652eaf4-b7c1-41a9-94fc-a4564f8f3365 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:19:46.244 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Sep 09 08:19:46.302 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Killing
Sep 09 08:19:46.354 W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326_e2e-var-expansion-5559_f3fcc96e-8ae6-4607-aa2f-60347739a1d6_0(2962a46174bdaf39ca0fd04cb4155584f2d2fac671198846c47435314bfcc90b): netplugin failed: "2020/09/09 08:16:49 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-var-expansion-5559;K8S_POD_NAME=var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326;K8S_POD_INFRA_CONTAINER_ID=2962a46174bdaf39ca0fd04cb4155584f2d2fac671198846c47435314bfcc90b, CNI_NETNS=/var/run/netns/a751b908-bb73-49b7-b353-d952ed564817).\n"
Sep 09 08:19:46.438 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(ad04963ce8214ad662085788e7b2a00c346346598f3a3602541282e4f1b64e5b): netplugin failed: "2020/09/09 08:16:56 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-webhook-1226;K8S_POD_NAME=sample-webhook-deployment-7bc8486f8c-mrn7z;K8S_POD_INFRA_CONTAINER_ID=ad04963ce8214ad662085788e7b2a00c346346598f3a3602541282e4f1b64e5b, CNI_NETNS=/var/run/netns/8d5a9607-d238-4096-9204-ed13e41786bf).\n"
Sep 09 08:19:46.571 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:19:46.765 W ns/e2e-container-lifecycle-hook-3600 pod/pod-with-poststart-http-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:19:46.829 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Created
Sep 09 08:19:46.850 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-5052_2b04a42f-f104-44ea-a3e7-e2e7b3638d26_0(49d9589b05fc6ca2c02ea4e82a0ab988d98148a81515394a61a89a4643afac68): netplugin failed: "2020/09/09 08:16:26 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-statefulset-5052;K8S_POD_NAME=ss2-0;K8S_POD_INFRA_CONTAINER_ID=49d9589b05fc6ca2c02ea4e82a0ab988d98148a81515394a61a89a4643afac68, CNI_NETNS=/var/run/netns/d7bb45fc-53e7-4aca-8b3e-64056ffd6a0d).\n"
Sep 09 08:19:46.897 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Started
Sep 09 08:19:47.669 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Restarted
Sep 09 08:19:48.931 I ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae node/ reason/Created
Sep 09 08:19:49.017 I ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:19:52.369 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (34 times)
Sep 09 08:19:52.841 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(5413162bd572ecc081ddec2379f75de756c5e8e4c3c8070cb4498b739bd4f56d): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:19:57.804 I ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb node/ reason/Created
Sep 09 08:19:57.863 I ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:20:04.124 W ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:20:05.163 E ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 container/pod-handle-http-request container exited with code 2 (Error): 
Sep 09 08:20:06.266 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Sep 09 08:20:06.272 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Killing
Sep 09 08:20:06.497 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:20:06.819 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Created
Sep 09 08:20:06.881 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Started
Sep 09 08:20:07.720 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Restarted
Sep 09 08:20:08.349 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-5052_2b04a42f-f104-44ea-a3e7-e2e7b3638d26_0(49895567b0b479ba0868d11c42b94e465f85600203ec0652ec188f8f44e00991): [e2e-statefulset-5052/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:08.798 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(6f96671abd1a8dd6f1904b7e3c129bec385fdf72aa2ffe197da029f9803c6e8b): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:11.535 W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326_e2e-var-expansion-5559_f3fcc96e-8ae6-4607-aa2f-60347739a1d6_0(cf824078b832d00910f6feaead28e152adb4864f06c5bac8622440f5eb9664fb): [e2e-var-expansion-5559/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:12.177 I ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 reason/AddedInterface Add eth0 [10.128.158.156/23]
Sep 09 08:20:12.977 I ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:20:13.142 I ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 reason/AddedInterface Add eth0 [10.128.139.223/23]
Sep 09 08:20:13.372 I ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:20:13.525 I ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 reason/AddedInterface Add eth0 [10.128.151.116/23]
Sep 09 08:20:13.981 I ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:20:14.015 I ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 node/ostest-5xqm8-worker-0-rzx47 container/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:20:14.256 I ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:20:14.303 I ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 node/ostest-5xqm8-worker-0-rzx47 container/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 reason/Created
Sep 09 08:20:14.518 I ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 node/ostest-5xqm8-worker-0-rzx47 container/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 reason/Started
Sep 09 08:20:14.578 I ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Created
Sep 09 08:20:14.640 I ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Started
Sep 09 08:20:14.854 I ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Ready
Sep 09 08:20:15.649 W ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:20:15.735 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(8d6e9e7eb17c5df3a4688342bd71da637edc472805bfcc20be99dcf53b69bc20): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:16.542 I ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 node/ reason/Created
Sep 09 08:20:16.712 I ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:20:16.835 W ns/e2e-container-lifecycle-hook-3600 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:20:19.244 W ns/e2e-emptydir-201 pod/pod-19c29278-dabf-431c-8ad1-18cb8a3252f9 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:20:25.070 I ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae reason/AddedInterface Add eth0 [10.128.172.28/23]
Sep 09 08:20:25.738 I ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:20:26.029 I ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Created
Sep 09 08:20:26.092 I ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Started
Sep 09 08:20:26.232 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (4 times)
Sep 09 08:20:26.290 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Killing
Sep 09 08:20:26.477 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:20:26.804 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Created
Sep 09 08:20:26.871 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Started
Sep 09 08:20:27.056 W ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:20:27.389 W ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:20:27.788 W ns/e2e-security-context-test-4842 pod/busybox-user-65534-b46efcd2-9bd3-42c7-bcfc-81466f1830f7 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:20:27.830 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Restarted
Sep 09 08:20:30.501 W ns/e2e-projected-9953 pod/pod-projected-configmaps-d3b56e23-e0bb-43a1-adb4-d161bee37cae node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:20:31.851 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(45ec66d8cbae190f2a4906b722e79c7eccb06d31bb436195baf9f5f7580ac52d): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:32.432 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-5052_2b04a42f-f104-44ea-a3e7-e2e7b3638d26_0(38a946c407c59eccf715eab4ab1a71850c01c43acdb86be1f76e5c2f036f9355): [e2e-statefulset-5052/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:32.667 I ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 node/ reason/Created
Sep 09 08:20:32.731 I ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:20:33.392 I ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 node/ reason/Created
Sep 09 08:20:33.460 I ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:20:34.462 W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326_e2e-var-expansion-5559_f3fcc96e-8ae6-4607-aa2f-60347739a1d6_0(1bda5a269c5d60c7682677e1e9f40374f3f3db9fb29189e3ec95fe1dda360e9f): [e2e-var-expansion-5559/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:36.343 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (115 times)
Sep 09 08:20:38.520 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(31cf8915ed6ee7c5442700981f007c88956880636ca89fd3bcd3d7033faa8b7f): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:46.015 I ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb reason/AddedInterface Add eth0 [10.128.154.227/23]
Sep 09 08:20:46.267 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (5 times)
Sep 09 08:20:46.319 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Killing
Sep 09 08:20:46.886 I ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:20:46.935 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/NotReady
Sep 09 08:20:47.174 I ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Created
Sep 09 08:20:47.201 W ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:20:47.261 I ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Started
Sep 09 08:20:47.343 I ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:20:47.396 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ reason/Created
Sep 09 08:20:47.424 W ns/openshift-marketplace pod/community-operators-kbjpm node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:20:47.489 I ns/openshift-marketplace pod/community-operators-kbjpm node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:20:47.668 W ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:20:47.684 W ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:20:47.692 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:20:47.720 I ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:20:47.758 I ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:20:47.758 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ reason/Created
Sep 09 08:20:47.789 I ns/openshift-marketplace pod/redhat-operators-r877k node/ reason/Created
Sep 09 08:20:47.933 I ns/openshift-marketplace pod/community-operators-4fm99 node/ reason/Created
Sep 09 08:20:47.992 I ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:20:48.025 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:20:48.087 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:20:48.636 W ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:20:48.682 W ns/openshift-marketplace pod/community-operators-kbjpm node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe errored: rpc error: code = NotFound desc = could not find container "15bf06ab7bb4603b388e8e8c4e330b9386f918831191c9edd23f682ce2b82908": container with ID starting with 15bf06ab7bb4603b388e8e8c4e330b9386f918831191c9edd23f682ce2b82908 not found: ID does not exist
Sep 09 08:20:49.462 W ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:20:49.462 W ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:20:49.584 W ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:20:49.584 W ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:20:49.676 W ns/openshift-marketplace pod/community-operators-kbjpm node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:20:49.676 W ns/openshift-marketplace pod/community-operators-kbjpm node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:20:49.802 W ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:20:49.802 W ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:20:50.386 W ns/openshift-marketplace pod/certified-operators-f47gq node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:20:51.324 W ns/openshift-marketplace pod/community-operators-kbjpm node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:20:51.774 I ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 reason/AddedInterface Add eth0 [10.128.149.77/23]
Sep 09 08:20:52.043 W ns/openshift-marketplace pod/redhat-operators-6w75w node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:20:52.110 W ns/openshift-marketplace pod/redhat-marketplace-mwrsg node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:20:52.119 W ns/e2e-var-expansion-5183 pod/var-expansion-22325d24-6662-4fd9-aa95-2bfac53855eb node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:20:52.373 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (40 times)
Sep 09 08:20:52.496 I ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:20:52.800 I ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Created
Sep 09 08:20:52.882 I ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Started
Sep 09 08:20:53.184 I ns/openshift-marketplace pod/certified-operators-vf5w4 reason/AddedInterface Add eth0 [10.128.2.120/23]
Sep 09 08:20:53.574 I ns/openshift-marketplace pod/redhat-operators-r877k reason/AddedInterface Add eth0 [10.128.2.178/23]
Sep 09 08:20:53.872 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv reason/AddedInterface Add eth0 [10.128.3.89/23]
Sep 09 08:20:53.890 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(28b91d6decb6b7d98f51c757049335fe4c9ead8c238bc148268fb3ff367b1d6a): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:53.979 I ns/openshift-marketplace pod/community-operators-4fm99 reason/AddedInterface Add eth0 [10.128.3.205/23]
Sep 09 08:20:53.980 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:20:54.184 I ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ reason/Created
Sep 09 08:20:54.296 I ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:20:54.338 I ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:20:54.706 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:20:54.891 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:20:54.960 W ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:20:56.727 I ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:20:57.044 I ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:20:57.072 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:20:57.117 I ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:20:57.375 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:20:57.438 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:20:57.529 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-5052_2b04a42f-f104-44ea-a3e7-e2e7b3638d26_0(237ac70f7df084005bcf1f87ddefb6dffa1f3213bea57872a13223b317887884): [e2e-statefulset-5052/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:20:57.668 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:20:57.929 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:20:57.989 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:20:58.269 W ns/e2e-downward-api-4984 pod/downward-api-3598ab4c-9d6d-411c-b6cd-3b53aa07eb38 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:20:59.622 W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326_e2e-var-expansion-5559_f3fcc96e-8ae6-4607-aa2f-60347739a1d6_0(75480d7fe68905e2bec439c1fbc06c4e64a27bb8776e5d6925341b92da3c15bc): [e2e-var-expansion-5559/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:21:00.685 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ reason/Created
Sep 09 08:21:00.749 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:00.758 I ns/e2e-job-4997 job/fail-once-local reason/SuccessfulCreate Created pod: fail-once-local-fn8r6
Sep 09 08:21:00.791 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ reason/Created
Sep 09 08:21:00.826 I ns/e2e-job-4997 job/fail-once-local reason/SuccessfulCreate Created pod: fail-once-local-wxdqn
Sep 09 08:21:00.955 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:21:03.486 I ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 reason/AddedInterface Add eth0 [10.128.184.2/23]
Sep 09 08:21:03.610 I ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:21:04.036 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(708300cfd36258474aa2ace00b9068b6e46cf031e81d0b295359c547dcc839d2): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:21:04.221 I ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:21:04.482 I ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:21:04.851 W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Failed to pull image "registry.redhat.io/redhat/community-operator-index:latest": rpc error: code = Unknown desc = Error reading manifest latest in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 503 Service Unavailable
Sep 09 08:21:04.868 W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: ErrImagePull
Sep 09 08:21:05.068 I ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:21:05.584 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off pulling image "registry.redhat.io/redhat/community-operator-index:latest"
Sep 09 08:21:05.743 W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed Error: ImagePullBackOff
Sep 09 08:21:06.612 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:21:07.978 W ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:21:11.266 W ns/e2e-emptydir-5015 pod/pod-dd66b42c-806e-4fe2-99d6-7df7d6689df7 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:21:11.436 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (5 times)
Sep 09 08:21:11.695 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (2 times)
Sep 09 08:21:11.728 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:21:12.804 I ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 reason/AddedInterface Add eth0 [10.128.182.123/23]
Sep 09 08:21:13.315 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ reason/Created
Sep 09 08:21:13.381 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:13.561 I ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 node/ostest-5xqm8-worker-0-cbbx9 container/env-test reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:13.847 I ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 node/ostest-5xqm8-worker-0-cbbx9 container/env-test reason/Created
Sep 09 08:21:13.962 I ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 node/ostest-5xqm8-worker-0-cbbx9 container/env-test reason/Started
Sep 09 08:21:15.199 W ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:21:17.814 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:21:17.889 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(34f9190fa86af3e22104bd36b03523a57a8532b7f76497a8e01f89c0cc995164): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:21:20.630 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (21 times)
Sep 09 08:21:21.420 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (6 times)
Sep 09 08:21:21.690 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (3 times)
Sep 09 08:21:22.556 W ns/e2e-configmap-883 pod/pod-configmaps-6c6619cf-368f-4c55-86a3-84ce02dfc2c4 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:21:24.302 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ reason/Created
Sep 09 08:21:24.493 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:21:27.541 W ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a_e2e-emptydir-4964_3498f719-7503-445d-b5e8-663f3e65d9aa_0(b64314b00d7c8f5cb0118d9cbc8178997842e10b32daadabc1106570c9e14ffb): [e2e-emptydir-4964/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:21:27.598 W ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fail-once-local-wxdqn_e2e-job-4997_18b14778-040d-4512-84a1-3249ae600461_0(92e7a713b19073f30f659f9fbafbc0f1b94faa87b3a01921a22207472fde1956): [e2e-job-4997/fail-once-local-wxdqn:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:21:27.609 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-5052_2b04a42f-f104-44ea-a3e7-e2e7b3638d26_0(03769f538b3c3448bec6db86ee4ff266150c4010d0a67e851c863520810b2359): [e2e-statefulset-5052/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:21:27.627 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(df9294c7310b2c5ad64b3b8edb7f9164cfef15785e3a5ed22a2d435796fbd438): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:21:27.654 W ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6_e2e-configmap-2627_2a213153-5a89-4ca2-a5e2-b8cb1397cc43_0(cc6408b18fe779c827d425539b16a09c9d22affad92c7b854d5a96889fe4fb62): [e2e-configmap-2627/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:36946->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:21:27.678 W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326_e2e-var-expansion-5559_f3fcc96e-8ae6-4607-aa2f-60347739a1d6_0(01c1b36c9fca8fb89efa0f8dc7301a408ba7f27afd86ce6f1662f9dfa0c7172d): [e2e-var-expansion-5559/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:21:28.099 I ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Ready
Sep 09 08:21:28.099 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Restarted
Sep 09 08:21:28.767 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 08:21:28.767 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 08:21:29.061 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:21:29.848 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:21:30.009 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:21:30.114 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:21:30.180 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:21:31.131 E ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 container/liveness container exited with code 2 (Error): 
Sep 09 08:21:31.373 I ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 node/ reason/Created
Sep 09 08:21:31.493 I ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:37.051 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:21:37.867 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 reason/AddedInterface Add eth0 [10.128.157.212/23]
Sep 09 08:21:38.613 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:38.750 W ns/e2e-container-probe-4201 pod/liveness-86e2cfa5-15c9-4ebf-8bc3-34b62fac3d58 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:21:38.838 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c reason/Created
Sep 09 08:21:39.007 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c reason/Started
Sep 09 08:21:39.135 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:39.143 E ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c container exited with code 1 (Error): 
Sep 09 08:21:39.449 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c reason/Created
Sep 09 08:21:39.586 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c reason/Started
Sep 09 08:21:39.730 I ns/e2e-statefulset-5052 pod/ss2-0 reason/AddedInterface Add eth0 [10.128.125.161/23]
Sep 09 08:21:39.861 I ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a reason/AddedInterface Add eth0 [10.128.128.55/23]
Sep 09 08:21:40.165 W ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 container/c reason/Restarted
Sep 09 08:21:40.628 I ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:21:40.660 I ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 reason/AddedInterface Add eth0 [10.128.180.150/23]
Sep 09 08:21:40.675 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ reason/Created
Sep 09 08:21:40.723 I ns/e2e-job-4997 job/fail-once-local reason/SuccessfulCreate Created pod: fail-once-local-9vpkp
Sep 09 08:21:40.736 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:21:40.784 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:41.077 I ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:21:41.151 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:21:41.151 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 reason/SandboxChanged Pod sandbox changed, it will be killed and re-created.
Sep 09 08:21:41.259 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:21:41.336 I ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:21:41.585 I ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:41.797 I ns/e2e-job-4997 pod/fail-once-local-wxdqn reason/AddedInterface Add eth0 [10.128.157.69/23]
Sep 09 08:21:41.941 I ns/e2e-job-4997 pod/fail-once-local-fn8r6 reason/AddedInterface Add eth0 [10.128.157.212/23]
Sep 09 08:21:42.314 I ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Created
Sep 09 08:21:42.492 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:21:42.951 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:43.146 I ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 container/dapi-container reason/Started
Sep 09 08:21:43.231 I ns/e2e-statefulset-5052 pod/ss2-1 node/ reason/Created
Sep 09 08:21:43.318 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-1 in StatefulSet ss2 successful
Sep 09 08:21:43.350 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Created
Sep 09 08:21:43.380 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:43.429 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Started
Sep 09 08:21:43.432 W ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:21:43.840 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:43.942 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-mrn7z_e2e-webhook-1226_9fc488ac-b549-46b8-96c7-f559feac9331_0(1971224325c4fa848dfc476629a55b008d559ee0f49066ea1ca7a94a983d655e): [e2e-webhook-1226/sample-webhook-deployment-7bc8486f8c-mrn7z:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:21:43.983 E ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c container exited with code 1 (Error): 
Sep 09 08:21:44.151 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Created
Sep 09 08:21:44.305 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Started
Sep 09 08:21:44.930 W ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Restarted
Sep 09 08:21:44.996 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ reason/Created
Sep 09 08:21:45.066 I ns/e2e-job-4997 job/fail-once-local reason/SuccessfulCreate Created pod: fail-once-local-z5p5k
Sep 09 08:21:45.144 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:21:45.551 W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:21:45.812 W ns/e2e-emptydir-4964 pod/pod-1a3436d7-d449-4bbe-928c-57d5f004eb0a node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:21:45.853 I ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 reason/SandboxChanged Pod sandbox changed, it will be killed and re-created.
Sep 09 08:21:45.945 I ns/e2e-job-4997 pod/fail-once-local-9vpkp reason/AddedInterface Add eth0 [10.128.156.139/23]
Sep 09 08:21:46.640 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:46.867 I ns/e2e-job-4997 pod/fail-once-local-wxdqn reason/AddedInterface Add eth0 [10.128.157.69/23]
Sep 09 08:21:46.867 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c reason/Created
Sep 09 08:21:46.933 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c reason/Started
Sep 09 08:21:46.980 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 08:21:47.228 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:47.246 W clusteroperator/network changed Progressing to False
Sep 09 08:21:47.290 E ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c container exited with code 1 (Error): 
Sep 09 08:21:47.316 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ reason/Created
Sep 09 08:21:47.519 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:21:47.553 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c reason/Created
Sep 09 08:21:47.608 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c reason/Started
Sep 09 08:21:48.237 W ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 container/c reason/Restarted
Sep 09 08:21:48.790 I ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 node/ reason/Created
Sep 09 08:21:48.854 I ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:49.179 I ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 reason/SandboxChanged Pod sandbox changed, it will be killed and re-created.
Sep 09 08:21:50.013 I ns/e2e-job-4997 pod/fail-once-local-9vpkp reason/AddedInterface Add eth0 [10.128.156.139/23]
Sep 09 08:21:50.235 W ns/e2e-var-expansion-5559 pod/var-expansion-62813400-457a-4818-a0e5-f8ccec9b9326 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:21:52.193 I ns/e2e-job-4997 pod/fail-once-local-z5p5k reason/AddedInterface Add eth0 [10.128.157.135/23]
Sep 09 08:21:53.238 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:53.366 I ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ reason/Created
Sep 09 08:21:53.373 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(d9bacfddcff63259197239393e19a25c0a868bd27841c6bcfe082382a03ce0f6): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:21:53.482 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Created
Sep 09 08:21:53.560 I ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:21:53.612 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Started
Sep 09 08:21:53.917 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:53.995 E ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c container exited with code 1 (Error): 
Sep 09 08:21:54.254 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Created
Sep 09 08:21:54.363 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Started
Sep 09 08:21:54.656 W ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ostest-5xqm8-worker-0-twrlr reason/FailedMount MountVolume.SetUp failed for volume "default-token-h6mb2" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:21:54.934 W ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Restarted
Sep 09 08:21:54.964 I ns/e2e-job-4997 job/fail-once-local reason/Completed Job completed
Sep 09 08:21:55.911 I ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 reason/SandboxChanged Pod sandbox changed, it will be killed and re-created.
Sep 09 04:21:56.633 I test="[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:21:56.906 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:21:57.653 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 reason/AddedInterface Add eth0 [10.128.118.252/23]
Sep 09 08:21:58.266 I ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 node/ reason/Created
Sep 09 08:21:58.350 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION
Sep 09 08:21:58.397 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:21:58.406 I ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ reason/Created
Sep 09 08:21:58.429 I ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:58.535 I ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:21:58.582 W ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:21:59.275 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION (2 times)
Sep 09 08:21:59.299 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:22:00.296 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION (3 times)
Sep 09 08:22:00.327 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:22:00.357 W ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:22:00.357 W ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/NotReady
Sep 09 08:22:00.450 W ns/e2e-webhook-1226 pod/sample-webhook-deployment-7bc8486f8c-mrn7z node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:03.670 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 reason/AddedInterface Add eth0 [10.128.179.214/23]
Sep 09 08:22:03.969 W ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:22:04.300 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/delcm-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:04.615 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/delcm-volume-test reason/Created
Sep 09 08:22:04.648 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/delcm-volume-test reason/Started
Sep 09 08:22:04.691 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/updcm-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:04.977 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/updcm-volume-test reason/Created
Sep 09 08:22:05.050 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/updcm-volume-test reason/Started
Sep 09 08:22:05.069 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/createcm-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:05.327 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/createcm-volume-test reason/Created
Sep 09 08:22:05.397 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/createcm-volume-test reason/Started
Sep 09 08:22:06.014 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/updcm-volume-test reason/Ready
Sep 09 08:22:06.014 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/delcm-volume-test reason/Ready
Sep 09 08:22:06.014 I ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/createcm-volume-test reason/Ready
Sep 09 08:22:06.859 W ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:22:06.892 W ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:22:06.964 W ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:22:07.009 W ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:22:07.257 W ns/e2e-projected-7197 pod/pod-projected-configmaps-1292fdb1-09e8-4c37-a880-c6aaca2bfcb1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:08.844 W ns/e2e-job-4997 pod/fail-once-local-9vpkp node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:08.857 W ns/e2e-job-4997 pod/fail-once-local-fn8r6 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:09.010 W ns/e2e-job-4997 pod/fail-once-local-wxdqn node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:22:09.093 W ns/e2e-job-4997 pod/fail-once-local-z5p5k node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:22:09.348 I ns/e2e-statefulset-5052 pod/ss2-1 reason/AddedInterface Add eth0 [10.128.124.18/23]
Sep 09 08:22:09.970 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:22:10.213 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:22:10.272 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:22:10.377 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:22:10.599 I ns/e2e-statefulset-5052 pod/ss2-2 node/ reason/Created
Sep 09 08:22:10.653 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-2 in StatefulSet ss2 successful
Sep 09 08:22:10.694 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:22:11.861 I ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 node/ reason/Created
Sep 09 08:22:11.902 I ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:22:14.222 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION (4 times)
Sep 09 08:22:14.251 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:22:16.757 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(571a56938248f204bf11d90521667aef54dafc8825ab19c9d7923e2ecd11122f): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:22:18.457 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec reason/AddedInterface Add eth0 [10.128.164.139/23]
Sep 09 08:22:18.968 - 44s   W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:22:19.155 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:19.394 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:22:19.488 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:22:19.504 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:19.594 I ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 reason/AddedInterface Add eth0 [10.128.187.69/23]
Sep 09 08:22:19.779 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Created
Sep 09 08:22:19.850 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Started
Sep 09 08:22:19.889 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 08:22:20.165 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Created
Sep 09 08:22:20.222 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Started
Sep 09 08:22:20.365 I ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:20.702 I ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Created
Sep 09 08:22:20.771 I ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Started
Sep 09 08:22:21.060 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:22:21.060 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Ready
Sep 09 08:22:21.060 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Ready
Sep 09 08:22:21.482 W ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:22:22.323 W ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:22:23.091 W ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:22:23.091 W ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/createcm-volume-test reason/NotReady
Sep 09 08:22:23.091 W ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/delcm-volume-test reason/NotReady
Sep 09 08:22:23.091 W ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 container/updcm-volume-test reason/NotReady
Sep 09 08:22:23.460 I ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 reason/AddedInterface Add eth0 [10.128.190.153/23]
Sep 09 08:22:24.200 I ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:24.553 I ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Created
Sep 09 08:22:24.678 I ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Started
Sep 09 08:22:25.850 W ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:22:26.799 W ns/e2e-configmap-2627 pod/pod-configmaps-b515e52c-8d6a-444f-9c61-ef8a1b2894a6 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:22:27.068 W ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:22:27.098 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Killing
Sep 09 08:22:27.114 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Killing
Sep 09 08:22:27.133 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 08:22:27.229 W ns/e2e-configmap-6866 pod/pod-configmaps-de95fe0b-0a1e-4bc4-b19c-285066dcc205 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:28.213 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION (5 times)
Sep 09 08:22:28.240 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:22:28.651 I ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb reason/AddedInterface Add eth0 [10.128.126.206/23]
Sep 09 08:22:29.065 I ns/e2e-webhook-6508 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:22:29.270 W ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:22:29.302 I ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ reason/Created
Sep 09 08:22:29.310 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Killing
Sep 09 08:22:29.387 I ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:29.389 W ns/e2e-projected-7399 pod/downwardapi-volume-b86ce00b-604f-4ea6-8999-7b645d5a2670 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:22:29.401 I ns/e2e-dns-2243 pod/dns-test-47a1a35a-d5b9-456b-b6ec-e6b2b48b1dec node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Killing
Sep 09 08:22:29.423 I ns/e2e-webhook-6508 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-7xz67
Sep 09 08:22:29.454 I ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 reason/AddedInterface Add eth0 [10.128.176.220/23]
Sep 09 08:22:29.531 I ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:22:29.707 I ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Created
Sep 09 08:22:29.765 I ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Started
Sep 09 08:22:29.967 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ reason/Created
Sep 09 08:22:30.033 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:22:30.209 I ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:30.488 I ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Ready
Sep 09 08:22:30.611 I ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Created
Sep 09 08:22:30.737 I ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Started
Sep 09 08:22:30.754 W ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:22:30.818 W ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "default-token-v9kz7" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:22:31.225 I ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 node/ reason/Created
Sep 09 08:22:31.288 I ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:22:32.879 W ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:22:34.365 I ns/e2e-statefulset-5052 pod/ss2-2 reason/AddedInterface Add eth0 [10.128.125.171/23]
Sep 09 08:22:35.107 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:22:35.266 I ns/e2e-webhook-4689 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:22:35.540 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:22:35.652 I ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ reason/Created
Sep 09 08:22:35.654 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:22:35.807 I ns/e2e-webhook-4689 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-cfqq7
Sep 09 08:22:35.887 I ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:22:36.041 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:22:36.133 I ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 reason/AddedInterface Add eth0 [10.128.140.163/23]
Sep 09 08:22:36.755 W ns/e2e-downward-api-6089 pod/downwardapi-volume-11e21131-d493-4c5c-8a8c-9963e18c9523 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:36.960 I ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:36.972 W ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:22:37.004 W ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "default-token-h2xcs" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:22:37.235 I ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:22:37.298 I ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:22:37.355 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:22:37.388 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:22:38.295 I ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ reason/Created
Sep 09 08:22:38.335 I ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:22:38.360 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:22:39.347 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:22:39.568 W ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:22:39.625 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(3b9e2df3d9dcb8947c88829497357c79a81bb9e5467cd5007717cec0e39e7bb0): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (2 times)
Sep 09 08:22:40.350 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:22:41.211 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION (6 times)
Sep 09 08:22:41.234 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:22:41.351 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:22:42.352 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:22:42.458 W ns/e2e-containers-8110 pod/client-containers-7e3ddfd8-9ea7-4eb6-b4a4-ed0059ee04a0 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:43.350 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:22:44.387 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:22:44.801 I ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ reason/Created
Sep 09 08:22:44.996 I ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:22:45.432 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:22:46.352 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:22:47.010 I ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 reason/AddedInterface Add eth0 [10.128.144.79/23]
Sep 09 08:22:47.356 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (11 times)
Sep 09 08:22:47.755 I ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:22:47.817 W ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:22:48.059 I ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Created
Sep 09 08:22:48.168 I ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 node/ostest-5xqm8-worker-0-rzx47 container/secret-volume-test reason/Started
Sep 09 08:22:48.366 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (12 times)
Sep 09 08:22:49.348 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (13 times)
Sep 09 08:22:49.560 W ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:22:49.560 W ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/NotReady
Sep 09 08:22:50.356 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (14 times)
Sep 09 08:22:50.460 W ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:22:51.344 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (15 times)
Sep 09 08:22:52.348 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (16 times)
Sep 09 08:22:53.205 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:22:53.211 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION (7 times)
Sep 09 08:22:53.361 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (17 times)
Sep 09 08:22:54.378 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (18 times)
Sep 09 08:22:55.078 W ns/e2e-secrets-4611 pod/pod-secrets-8d7d490f-82a7-417d-bf3c-0c14c8629513 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:55.349 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (19 times)
Sep 09 08:22:55.989 W ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-subpath-test-configmap-8xrd_e2e-subpath-2103_0af23fb5-d75f-44e1-b6d8-16ae23f7f3dc_0(3e20b93e4dcb55bcc2a9933b5210bcfbc2a6ef43822d3c61a8a4974693cdf411): [e2e-subpath-2103/pod-subpath-test-configmap-8xrd:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:22:56.030 W ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de_e2e-container-probe-9635_f86e9519-71fb-443d-954b-694baac46b98_0(5c9e1e12d6e628cdb23522c37ebcffd27f905a6d89a0ce77f93c1f6fe76dfb50): [e2e-container-probe-9635/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:22:56.353 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (20 times)
Sep 09 08:22:56.714 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/NotReady
Sep 09 08:22:56.714 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Restarted
Sep 09 08:22:57.319 W ns/e2e-configmap-6838 pod/pod-configmaps-50106322-6a82-4583-b8d4-9fe81e96d7eb node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:22:57.363 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (21 times)
Sep 09 08:22:57.404 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:22:57.924 I ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ reason/Created
Sep 09 08:22:57.970 I ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:22:58.376 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:22:58.437 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:22:58.472 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Killing
Sep 09 08:22:58.498 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-2 in StatefulSet ss2 successful
Sep 09 08:22:58.934 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.128.125.171:80/index.html": dial tcp 10.128.125.171:80: connect: connection refused
Sep 09 08:22:58.942 W ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2_e2e-downward-api-3425_13d104ac-5938-4ba1-bea8-eccfb55754e7_0(de9b64294a978fbc495551687f7c0ac11acae14375637f8f7dfc0a80911deeb6): [e2e-downward-api-3425/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": dial tcp [::1]:5036: connect: connection refused
Sep 09 08:22:58.949 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:23:01.163 I ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 reason/AddedInterface Add eth0 [10.128.168.191/23]
Sep 09 08:23:01.712 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:23:01.770 I ns/e2e-statefulset-5052 pod/ss2-2 node/ reason/Created
Sep 09 08:23:01.817 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-2 in StatefulSet ss2 successful (2 times)
Sep 09 08:23:01.850 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:23:02.103 I ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:02.385 I ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 08:23:02.502 I ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 08:23:03.721 I ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 08:23:04.890 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(75beb4aaab0d890c421159b6a57b6a55598bbf2d59a0f5cb5b3cb7e5e6906b1e): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (3 times)
Sep 09 08:23:07.212 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:23:07.242 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Failed Error: missing value for ANNOTATION (8 times)
Sep 09 08:23:07.262 W ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:23:08.136 I ns/e2e-statefulset-5052 pod/ss2-2 reason/AddedInterface Add eth0 [10.128.125.95/23]
Sep 09 08:23:08.321 E ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:23:10.010 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulling image/docker.io/library/httpd:2.4.39-alpine
Sep 09 08:23:10.301 W ns/e2e-webhook-6508 pod/sample-webhook-deployment-7bc8486f8c-7xz67 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:23:10.385 I ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 reason/AddedInterface Add eth0 [10.128.205.34/23]
Sep 09 08:23:11.020 I ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:11.344 I ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 08:23:11.382 I ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 08:23:11.704 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Ready
Sep 09 08:23:11.864 W clusteroperator/network changed Progressing to False
Sep 09 08:23:12.657 I ns/e2e-kubectl-1664 deployment/frontend reason/ScalingReplicaSet Scaled up replica set frontend-7c7f745c7 to 3
Sep 09 08:23:12.714 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ reason/Created
Sep 09 08:23:12.785 I ns/e2e-kubectl-1664 replicaset/frontend-7c7f745c7 reason/SuccessfulCreate Created pod: frontend-7c7f745c7-lr86x
Sep 09 08:23:12.812 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:23:12.841 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ reason/Created
Sep 09 08:23:12.862 I ns/e2e-kubectl-1664 replicaset/frontend-7c7f745c7 reason/SuccessfulCreate Created pod: frontend-7c7f745c7-5bzgd
Sep 09 08:23:12.880 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ reason/Created
Sep 09 08:23:12.886 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:23:12.906 I ns/e2e-kubectl-1664 replicaset/frontend-7c7f745c7 reason/SuccessfulCreate Created pod: frontend-7c7f745c7-qskln
Sep 09 08:23:12.914 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:23:13.158 I ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 08:23:13.530 I ns/e2e-kubectl-1664 deployment/agnhost-primary reason/ScalingReplicaSet Scaled up replica set agnhost-primary-c97587cb5 to 1
Sep 09 08:23:13.591 I ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ reason/Created
Sep 09 08:23:13.611 I ns/e2e-kubectl-1664 replicaset/agnhost-primary-c97587cb5 reason/SuccessfulCreate Created pod: agnhost-primary-c97587cb5-qpsjn
Sep 09 08:23:13.704 I ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:23:13.947 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:23:14.334 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Created
Sep 09 08:23:14.372 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Started
Sep 09 08:23:14.445 I ns/e2e-kubectl-1664 deployment/agnhost-replica reason/ScalingReplicaSet Scaled up replica set agnhost-replica-98d447897 to 2
Sep 09 08:23:14.529 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ reason/Created
Sep 09 08:23:14.871 I ns/e2e-kubectl-1664 replicaset/agnhost-replica-98d447897 reason/SuccessfulCreate Created pod: agnhost-replica-98d447897-l4hbz
Sep 09 08:23:14.871 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:23:14.929 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Ready
Sep 09 08:23:14.978 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ reason/Created
Sep 09 08:23:15.009 I ns/e2e-kubectl-1664 replicaset/agnhost-replica-98d447897 reason/SuccessfulCreate Created pod: agnhost-replica-98d447897-dk6tf
Sep 09 08:23:15.015 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:23:15.999 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:23:16.062 I ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/Killing
Sep 09 08:23:16.377 W ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:23:17.525 E ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook container exited with code 2 (Error): 
Sep 09 08:23:17.943 I ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de reason/AddedInterface Add eth0 [10.128.136.189/23]
Sep 09 08:23:18.248 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ reason/Created
Sep 09 08:23:18.391 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a reason/AddedInterface Add eth0 [10.128.170.54/23]
Sep 09 08:23:18.764 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:23:18.792 I ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 container/test-webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:18.870 I ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 container/test-webserver reason/Created
Sep 09 08:23:18.908 I ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 container/test-webserver reason/Started
Sep 09 08:23:19.314 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:23:19.612 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 08:23:19.680 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 08:23:20.367 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 08:23:22.081 W ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:23:22.369 I ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Killing
Sep 09 08:23:22.485 I ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 reason/AddedInterface Add eth0 [10.128.206.17/23]
Sep 09 08:23:22.759 W ns/e2e-webhook-4689 pod/sample-webhook-deployment-7bc8486f8c-cfqq7 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:23:23.349 I ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 node/ostest-5xqm8-worker-0-cbbx9 container/secret-env-test reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:23:23.692 I ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 node/ostest-5xqm8-worker-0-cbbx9 container/secret-env-test reason/Created
Sep 09 08:23:23.808 I ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 node/ostest-5xqm8-worker-0-cbbx9 container/secret-env-test reason/Started
Sep 09 08:23:24.934 I ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd reason/AddedInterface Add eth0 [10.128.193.0/23]
Sep 09 08:23:25.628 I ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-8xrd reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:25.858 W ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:23:25.946 I ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-8xrd reason/Created
Sep 09 08:23:25.990 I ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-8xrd reason/Started
Sep 09 08:23:26.302 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/docker.io/library/httpd:2.4.39-alpine
Sep 09 08:23:26.619 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:23:26.685 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:23:26.766 I ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 container/test-container-subpath-configmap-8xrd reason/Ready
Sep 09 08:23:26.802 W ns/e2e-pods-3875 pod/pod-submit-remove-a028997e-a60c-4efd-ac90-9e602db5819a node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:23:27.609 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:23:27.821 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(43b632befdaa4c121443dd38ccfad56ddf722b2ee27087371d2342c5bf5b51f3): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (4 times)
Sep 09 08:23:27.821 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:23:27.822 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Killing
Sep 09 08:23:27.863 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-1 in StatefulSet ss2 successful
Sep 09 08:23:27.867 W ns/e2e-statefulset-5052 service/test reason/FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service e2e-statefulset-5052/test: Error updating test-pghbn EndpointSlice for Service e2e-statefulset-5052/test: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "test-pghbn": the object has been modified; please apply your changes to the latest version and try again
Sep 09 08:23:28.017 I ns/e2e-gc-3422 deployment/simpletest.deployment reason/ScalingReplicaSet Scaled up replica set simpletest.deployment-7f7555f8bc to 2
Sep 09 08:23:28.144 I ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-5hj5q node/ reason/Created
Sep 09 08:23:28.205 I ns/e2e-gc-3422 replicaset/simpletest.deployment-7f7555f8bc reason/SuccessfulCreate Created pod: simpletest.deployment-7f7555f8bc-5hj5q
Sep 09 08:23:28.254 I ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-5hj5q node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:23:28.255 I ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-m2sk5 node/ reason/Created
Sep 09 08:23:28.343 I ns/e2e-gc-3422 replicaset/simpletest.deployment-7f7555f8bc reason/SuccessfulCreate Created pod: simpletest.deployment-7f7555f8bc-m2sk5
Sep 09 08:23:28.412 I ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-m2sk5 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:23:34.902 I ns/e2e-resourcequota-1522 pod/test-pod node/ reason/Created
Sep 09 08:23:34.980 W ns/e2e-resourcequota-1522 pod/test-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:23:35.065 W ns/e2e-resourcequota-1522 pod/test-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:23:36.002 W ns/e2e-secrets-3602 pod/pod-secrets-bbf9ed4b-b813-47f4-aa1e-c2a8e2dcb709 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:23:37.939 W ns/e2e-resourcequota-1522 pod/test-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:23:38.972 W ns/e2e-resourcequota-1522 pod/test-pod node/ reason/GracefulDelete in 0s
Sep 09 08:23:38.989 W ns/e2e-resourcequota-1522 pod/test-pod node/ reason/Deleted
Sep 09 08:23:39.081 W ns/e2e-resourcequota-1522 pod/test-pod reason/FailedScheduling skip schedule deleting pod: e2e-resourcequota-1522/test-pod
Sep 09 08:23:39.354 I ns/e2e-events-2837 / reason/Test This is a test event
Sep 09 08:23:39.443 I ns/e2e-events-2837 / reason/Test This is a test event - patched
Sep 09 08:23:41.127 I ns/e2e-resourcequota-1522 pod/terminating-pod node/ reason/Created
Sep 09 08:23:41.198 W ns/e2e-resourcequota-1522 pod/terminating-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:23:41.247 W ns/e2e-resourcequota-1522 pod/terminating-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:23:42.946 W ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-m2sk5 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:23:42.977 W ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-5hj5q node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:23:43.035 I ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 container/test-webserver reason/Ready
Sep 09 08:23:43.901 W ns/e2e-resourcequota-1522 pod/terminating-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 08:23:44.274 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:23:44.616 W ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-m2sk5 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:23:44.785 I ns/e2e-statefulset-5052 pod/ss2-1 node/ reason/Created
Sep 09 08:23:44.851 W ns/e2e-gc-3422 pod/simpletest.deployment-7f7555f8bc-5hj5q node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:23:45.149 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-1 in StatefulSet ss2 successful (2 times)
Sep 09 08:23:45.196 W ns/e2e-resourcequota-1522 pod/terminating-pod node/ reason/GracefulDelete in 0s
Sep 09 08:23:45.221 W ns/e2e-resourcequota-1522 pod/terminating-pod node/ reason/Deleted
Sep 09 08:23:45.303 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:23:45.423 W ns/e2e-resourcequota-1522 pod/terminating-pod reason/FailedScheduling skip schedule deleting pod: e2e-resourcequota-1522/terminating-pod
Sep 09 08:23:46.235 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "default-token-p6nf9" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:23:47.295 W ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:23:47.901 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:23:47.901 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 container/dapi-container reason/NotReady
Sep 09 08:23:48.968 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:23:49.618 I ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ reason/Created
Sep 09 08:23:49.882 I ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:23:52.383 I ns/e2e-webhook-5914 deployment/sample-webhook-deployment reason/ScalingReplicaSet Scaled up replica set sample-webhook-deployment-7bc8486f8c to 1
Sep 09 08:23:52.610 I ns/e2e-var-expansion-1103 pod/var-expansion-d6e7d4f7-ae77-4cce-8711-00ea52bd308b node/ reason/Created
Sep 09 08:23:52.881 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz reason/AddedInterface Add eth0 [10.128.147.91/23]
Sep 09 08:23:52.935 I ns/e2e-var-expansion-1103 pod/var-expansion-d6e7d4f7-ae77-4cce-8711-00ea52bd308b node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:23:52.960 I ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn reason/AddedInterface Add eth0 [10.128.146.14/23]
Sep 09 08:23:52.961 W ns/e2e-subpath-2103 pod/pod-subpath-test-configmap-8xrd node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:23:53.153 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd reason/AddedInterface Add eth0 [10.128.147.244/23]
Sep 09 08:23:53.155 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x reason/AddedInterface Add eth0 [10.128.147.43/23]
Sep 09 08:23:53.174 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln reason/AddedInterface Add eth0 [10.128.146.68/23]
Sep 09 08:23:53.197 I ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ reason/Created
Sep 09 08:23:53.285 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf reason/AddedInterface Add eth0 [10.128.147.39/23]
Sep 09 08:23:53.426 I ns/e2e-webhook-5914 replicaset/sample-webhook-deployment-7bc8486f8c reason/SuccessfulCreate Created pod: sample-webhook-deployment-7bc8486f8c-cl8gs
Sep 09 04:23:53.552 - 339s  I test="[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:23:53.817 I ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:23:53.817 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:53.882 I ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 container/primary reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:53.941 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr container/guestbook-frontend reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:54.026 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Created
Sep 09 08:23:54.112 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 container/guestbook-frontend reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:54.148 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Started
Sep 09 08:23:54.251 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 container/guestbook-frontend reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:54.271 I ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 container/primary reason/Created
Sep 09 08:23:54.327 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr container/guestbook-frontend reason/Created
Sep 09 08:23:54.374 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:23:54.447 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr container/guestbook-frontend reason/Started
Sep 09 08:23:54.447 I ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 container/primary reason/Started
Sep 09 08:23:54.453 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 container/guestbook-frontend reason/Created
Sep 09 08:23:54.494 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 container/guestbook-frontend reason/Started
Sep 09 08:23:54.553 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 container/guestbook-frontend reason/Created
Sep 09 08:23:54.672 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(7d7ce8a127972514d57dc2c34b171dfce5d0a184594482a3d3d0eca525eb7ead): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (5 times)
Sep 09 08:23:54.685 I ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 container/primary reason/Ready
Sep 09 08:23:54.785 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 container/guestbook-frontend reason/Started
Sep 09 08:23:54.844 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Created
Sep 09 08:23:54.902 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Started
Sep 09 08:23:54.946 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 container/guestbook-frontend reason/Ready
Sep 09 08:23:55.051 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Ready
Sep 09 08:23:55.270 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr container/guestbook-frontend reason/Ready
Sep 09 08:23:55.711 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 container/guestbook-frontend reason/Ready
Sep 09 08:23:55.810 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Ready
Sep 09 08:23:55.911 I ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ reason/Created
Sep 09 08:23:56.080 I ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:23:57.356 W ns/e2e-var-expansion-2659 pod/var-expansion-6456049c-7403-460a-8e4f-51ee62b41c23 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:23:57.906 I ns/e2e-statefulset-5052 pod/ss2-1 reason/AddedInterface Add eth0 [10.128.124.18/23]
Sep 09 08:23:58.092 I ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 reason/AddedInterface Add eth0 [10.128.121.191/23]
Sep 09 04:23:58.117 - 327s  I test="[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:23:59.988 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulling image/docker.io/library/httpd:2.4.39-alpine
Sep 09 08:24:00.023 I ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:24:00.208 I ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Created
Sep 09 08:24:00.360 I ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Started
Sep 09 08:24:00.667 W ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:24:01.299 I ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ reason/Created
Sep 09 08:24:01.382 I ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:24:01.930 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 reason/AddedInterface Add eth0 [10.128.132.87/23]
Sep 09 08:24:02.524 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Pulled image/docker.io/library/nginx:1.14-alpine
Sep 09 08:24:02.798 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Created
Sep 09 08:24:02.928 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Started
Sep 09 08:24:08.968 E kube-apiserver Kube API started failing: Get https://api.ostest.shiftstack.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 09 08:24:08.968 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ostest.shiftstack.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 09 08:24:08.968 I oauth-apiserver OAuth API stopped responding to GET requests: Get https://api.ostest.shiftstack.com:6443/apis/oauth.openshift.io/v1/oauthaccesstokens/missing?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:24:09.797 W ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:24:14.839 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Ready
Sep 09 08:24:14.901 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica container exited with code 1 (Error): 
Sep 09 08:24:15.000 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Ready
Sep 09 08:24:15.000 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Restarted
Sep 09 08:24:18.968 - 30s   E kube-apiserver Kube API is not responding to GET requests
Sep 09 08:24:18.968 - 30s   E oauth-apiserver OAuth API is not responding to GET requests
Sep 09 08:24:18.968 - 30s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 09 08:24:20.964 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:24:20.964 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:24:21.311 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:24:32.896 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica container exited with code 1 (Error): 
Sep 09 08:24:33.471 W ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:24:33.471 W ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 container/test-webserver reason/NotReady
Sep 09 08:24:33.544 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:24:33.615 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:24:33.639 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica container exited with code 1 (Error): 
Sep 09 08:24:33.639 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Restarted
Sep 09 08:24:33.968 - 75s   W ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:24:34.920 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:24:34.920 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:24:42.279 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/NotReady
Sep 09 08:24:44.281 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Ready
Sep 09 08:24:44.281 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Restarted
Sep 09 08:24:48.646 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/NotReady
Sep 09 08:24:48.968 - 60s   W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:24:48.980 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 08:24:48.980 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 08:24:49.053 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Ready
Sep 09 08:24:49.053 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Restarted
Sep 09 08:24:51.546 I openshift-apiserver OpenShift API started responding to GET requests
Sep 09 08:24:51.546 I oauth-apiserver OAuth API started responding to GET requests
Sep 09 08:24:51.549 I kube-apiserver Kube API started responding to GET requests
Sep 09 08:24:51.618 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Ready
Sep 09 08:24:51.667 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Ready
Sep 09 08:24:52.125 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/DeadlineExceeded Pod was active on the node longer than the specified deadline
Sep 09 08:24:52.142 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/Killing
Sep 09 08:24:52.149 E ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline
Sep 09 08:24:52.149 W ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 container/nginx reason/NotReady
Sep 09 08:24:53.021 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/DeadlineExceeded Pod was active on the node longer than the specified deadline (2 times)
Sep 09 08:24:54.021 I ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/DeadlineExceeded Pod was active on the node longer than the specified deadline (3 times)
Sep 09 08:24:56.129 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason" to "NodeControllerDegraded: All master nodes are ready" (2 times)
Sep 09 08:24:57.442 I ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ reason/Created
Sep 09 08:24:57.487 I ns/e2e-kubectl-5600 replicationcontroller/agnhost-primary reason/SuccessfulCreate Created pod: agnhost-primary-tzkl5
Sep 09 08:24:57.548 I ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:25:02.339 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 reason/BackOff Back-off restarting failed container (2 times)
Sep 09 08:25:02.350 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica container exited with code 1 (Error): 
Sep 09 08:25:03.650 W ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:25:03.968 - 120s  W ns/e2e-var-expansion-1103 pod/var-expansion-d6e7d4f7-ae77-4cce-8711-00ea52bd308b node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:25:03.968 - 195s  W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:25:03.968 - 195s  W ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:25:03.968 - 254s  W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr pod has been pending longer than a minute
Sep 09 08:25:03.968 - 269s  W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:25:06.932 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 08:25:07.098 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container (2 times)
Sep 09 08:25:07.136 E ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica container exited with code 1 (Error): 
Sep 09 08:25:12.805 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(d7513e4362cf1af9ad7d85c6044b8e7f6f5555d11248275928d8000d30524bcf): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (8 times)
Sep 09 08:25:17.770 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 reason/BackOff Back-off restarting failed container (3 times)
Sep 09 08:25:18.196 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 reason/BackOff Back-off restarting failed container (3 times)
Sep 09 08:25:25.330 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": dial tcp 10.196.3.65:8091: connect: connection refused (48 times)
Sep 09 08:25:29.206 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:25:29.527 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Created
Sep 09 08:25:29.573 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Started
Sep 09 08:25:29.777 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:25:30.031 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Created
Sep 09 08:25:30.082 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Started
Sep 09 08:25:30.243 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Ready
Sep 09 08:25:30.243 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/Restarted
Sep 09 08:25:30.483 I ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Ready
Sep 09 08:25:30.483 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/Restarted
Sep 09 08:25:34.578 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(6a6c1129a637dd39a5edec97be83e46ffe8a1ebd41ecd6809c8ddcd3f3fb14d9): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (9 times)
Sep 09 08:25:42.568 W ns/e2e-pods-7077 pod/pod-update-activedeadlineseconds-3ae57774-b8af-42dc-ab35-8976a01ad967 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:25:45.919 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:25:46.159 W clusteroperator/network changed Progressing to False
Sep 09 08:25:46.973 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbbdc954cefdfc297e03ef3b9b36211b8f7378378a89e071585e70b1121161d2
Sep 09 08:25:47.379 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Created
Sep 09 08:25:47.570 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Started
Sep 09 08:25:47.772 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Killing
Sep 09 08:25:47.842 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/NotReady
Sep 09 08:25:47.842 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Restarted
Sep 09 08:25:48.139 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e63014ee1a7bd4f1afa20cb684536a3151e5799764ba585da390d00f003be350
Sep 09 08:25:48.459 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:25:48.483 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:25:48.502 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:25:48.551 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 container/guestbook-frontend reason/Killing
Sep 09 08:25:48.551 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr container/guestbook-frontend reason/Killing
Sep 09 08:25:48.567 I ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 container/guestbook-frontend reason/Killing
Sep 09 08:25:48.697 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 reason/Unhealthy Readiness probe failed: Get "https://10.196.3.65:17697/healthz": dial tcp 10.196.3.65:17697: connect: connection refused (4 times)
Sep 09 08:25:48.951 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Created
Sep 09 08:25:49.316 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/NotReady
Sep 09 08:25:49.316 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Restarted
Sep 09 08:25:49.815 W ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:25:50.258 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:25:50.337 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:25:50.374 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:25:50.374 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr container/guestbook-frontend reason/NotReady
Sep 09 08:25:50.431 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:25:50.431 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 container/guestbook-frontend reason/NotReady
Sep 09 08:25:50.712 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:25:50.712 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 container/guestbook-frontend reason/NotReady
Sep 09 08:25:51.403 W ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:25:51.403 W ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 container/primary reason/NotReady
Sep 09 08:25:51.664 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.55.100:443: dial tcp 172.30.55.100:443: connect: connection refused (31 times)
Sep 09 08:25:51.732 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1.000724439s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.55.100:443 succeeded
Sep 09 08:25:51.786 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.55.100:443: dial tcp 172.30.55.100:443: connect: connection refused (32 times)
Sep 09 08:25:51.871 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 999.214365ms: kubernetes-apiserver-service-cluster: tcp connection to 172.30.55.100:443 succeeded
Sep 09 08:25:51.981 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason" (2 times)
Sep 09 08:25:52.091 I ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 node/ reason/Created
Sep 09 08:25:52.187 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.55.100:443: dial tcp 172.30.55.100:443: connect: connection refused (30 times)
Sep 09 08:25:52.187 I ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:25:52.213 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1.000647537s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.55.100:443 succeeded
Sep 09 08:25:52.254 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.55.100:443: dial tcp 172.30.55.100:443: connect: connection refused (31 times)
Sep 09 08:25:52.271 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: failed to establish a TCP connection to 10.196.3.65:6443: dial tcp 10.196.3.65:6443: connect: connection refused (7 times)
Sep 09 08:25:52.367 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 container/replica reason/NotReady
Sep 09 08:25:52.482 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: failed to establish a TCP connection to 10.196.3.65:6443: dial tcp 10.196.3.65:6443: connect: connection refused (7 times)
Sep 09 08:25:52.669 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-lr86x node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:25:52.708 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 container/replica reason/NotReady
Sep 09 08:25:52.760 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-qskln node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:25:52.940 W ns/e2e-var-expansion-1103 pod/var-expansion-d6e7d4f7-ae77-4cce-8711-00ea52bd308b node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:25:53.744 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-dk6tf node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:25:54.487 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.55.100:443: dial tcp 172.30.55.100:443: connect: connection refused (31 times)
Sep 09 08:25:54.507 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1.001197021s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.55.100:443 succeeded
Sep 09 08:25:56.816 W ns/e2e-kubectl-1664 pod/agnhost-primary-c97587cb5-qpsjn node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:25:57.338 W ns/e2e-kubectl-1664 pod/frontend-7c7f745c7-5bzgd node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:25:58.162 W ns/e2e-kubectl-1664 pod/agnhost-replica-98d447897-l4hbz node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:25:58.436 W ns/e2e-container-probe-9635 pod/test-webserver-c8ffeba6-8b91-49c7-859c-315a3bd651de node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:25:58.618 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.55.100:443: dial tcp 172.30.55.100:443: connect: connection refused (33 times)
Sep 09 08:25:58.701 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1.002703546s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.55.100:443 succeeded
Sep 09 08:25:59.102 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:25:59.180 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 2.108313931s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.55.100:443 succeeded
Sep 09 08:25:59.323 I ns/e2e-statefulset-5052 pod/ss2-0 node/ reason/Created
Sep 09 08:25:59.357 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-0 in StatefulSet ss2 successful (2 times)
Sep 09 08:25:59.429 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:25:59.654 W ns/e2e-downward-api-3425 pod/downwardapi-volume-8001c825-592e-4c93-84c1-c6b2671cf6c2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:26:00.089 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(b41dfdd2c3b867076205e8a429d9079d63a5c4ec6551d5db776f23b06f051bf7): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (10 times)
Sep 09 08:26:02.233 I ns/e2e-kubectl-7173 pod/e2e-test-httpd-pod node/ reason/Created
Sep 09 08:26:02.334 I ns/e2e-kubectl-7173 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:26:03.155 W ns/e2e-kubectl-7173 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:26:03.968 W ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:26:04.364 I ns/e2e-statefulset-5052 pod/ss2-0 reason/AddedInterface Add eth0 [10.128.125.161/23]
Sep 09 08:26:05.036 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulling image/docker.io/library/httpd:2.4.39-alpine
Sep 09 08:26:07.230 W ns/e2e-kubectl-7173 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:26:08.848 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ reason/Created
Sep 09 08:26:09.051 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:26:12.678 I ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 reason/AddedInterface Add eth0 [10.128.153.212/23]
Sep 09 08:26:13.298 I ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 container/agnhost-primary reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:26:13.618 I ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 container/agnhost-primary reason/Created
Sep 09 08:26:13.668 I ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 container/agnhost-primary reason/Started
Sep 09 08:26:13.716 I ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 container/agnhost-primary reason/Ready
Sep 09 08:26:19.183 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.39-alpine
Sep 09 08:26:19.470 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:26:19.609 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:26:20.746 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:26:21.345 I ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 reason/AddedInterface Add eth0 [10.128.159.48/23]
Sep 09 08:26:22.061 I ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:26:22.114 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Restarted
Sep 09 08:26:22.345 I ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Created
Sep 09 08:26:22.941 I ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 node/ostest-5xqm8-worker-0-cbbx9 container/configmap-volume-test reason/Started
Sep 09 08:26:24.054 W ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:26:24.295 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(cfd81ad3e6aedc8290ece777ff822c6499fc779dbc489b9214c4cb1372482f7b): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (11 times)
Sep 09 08:26:24.964 W ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:26:25.836 W ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:26:25.836 W ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 container/agnhost-primary reason/NotReady
Sep 09 08:26:27.455 W ns/e2e-configmap-4017 pod/pod-configmaps-173e0eb0-182f-4d3b-9d28-3d4a62f94193 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:26:28.988 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:26:29.040 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:26:29.987 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:26:30.924 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:26:31.162 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ reason/Created
Sep 09 08:26:31.224 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:26:31.919 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:26:32.405 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (54 times)
Sep 09 08:26:32.919 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:26:33.909 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:26:33.968 W ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:26:34.929 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:26:35.308 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed reason/AddedInterface Add eth0 [10.128.130.126/23]
Sep 09 08:26:35.927 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:26:36.168 I ns/e2e-downward-api-7567 pod/downwardapi-volume-efeee222-b835-4fd9-83dc-46c7f5cf39f4 node/ reason/Created
Sep 09 08:26:36.215 I ns/e2e-downward-api-7567 pod/downwardapi-volume-efeee222-b835-4fd9-83dc-46c7f5cf39f4 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:26:36.915 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:26:37.288 W ns/e2e-kubectl-5600 pod/agnhost-primary-tzkl5 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:26:37.919 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:26:38.992 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (11 times)
Sep 09 08:26:39.937 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (12 times)
Sep 09 08:26:41.059 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (13 times)
Sep 09 08:26:41.259 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/init1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:26:41.924 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (14 times)
Sep 09 08:26:42.809 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/init1 reason/Created
Sep 09 08:26:42.855 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/init1 reason/Started
Sep 09 08:26:42.908 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (15 times)
Sep 09 08:26:43.846 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/init2 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:26:43.927 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (16 times)
Sep 09 08:26:44.914 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (17 times)
Sep 09 08:26:45.211 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/init2 reason/Created
Sep 09 08:26:45.401 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/init2 reason/Started
Sep 09 08:26:45.845 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/run1 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:26:45.974 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (18 times)
Sep 09 08:26:47.080 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (19 times)
Sep 09 08:26:47.493 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/run1 reason/Created
Sep 09 08:26:47.746 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/run1 reason/Started
Sep 09 08:26:47.925 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (20 times)
Sep 09 08:26:47.974 I ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/run1 reason/Ready
Sep 09 04:26:48.075 - 258s  I test="[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:26:48.710 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(c6999f0a3ef1b4a6ad7307c9dcea53905ee2c22d7afe57d672be4a31d3f6660c): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (12 times)
Sep 09 08:26:49.169 I ns/e2e-services-252 pod/kube-proxy-mode-detector node/ reason/Created
Sep 09 08:26:49.246 I ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:26:49.962 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:26:50.001 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:26:50.026 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-2 in StatefulSet ss2 successful (2 times)
Sep 09 08:26:50.026 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Killing
Sep 09 08:26:50.323 I ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 container/detector reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:26:50.548 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.128.125.95:80/index.html": dial tcp 10.128.125.95:80: connect: connection refused
Sep 09 08:26:50.584 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:26:50.704 I ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 container/detector reason/Created
Sep 09 08:26:50.780 I ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 container/detector reason/Started
Sep 09 08:26:50.887 I ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 container/detector reason/Ready
Sep 09 08:26:51.513 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:26:51.668 W ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:26:52.654 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:26:52.850 I ns/e2e-statefulset-5052 pod/ss2-2 node/ reason/Created
Sep 09 08:26:52.868 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-2 in StatefulSet ss2 successful (3 times)
Sep 09 08:26:52.917 I ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 container/detector reason/Killing
Sep 09 08:26:52.932 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:26:53.939 E ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 container/detector container exited with code 2 (Error): 
Sep 09 08:26:54.522 E ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints container exited with code 255 (Error): 32\n\ngoroutine 1086 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000e68d80)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1100 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000f51080)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1102 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000f512c0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1109 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000f515c0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1111 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000f517a0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1118 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000f51aa0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1120 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000f51c80)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n
Sep 09 08:26:58.181 I ns/e2e-statefulset-5052 pod/ss2-2 reason/AddedInterface Add eth0 [10.128.125.95/23]
Sep 09 08:26:58.295 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 reason/AddedInterface Add eth0 [10.128.143.212/23]
Sep 09 08:26:58.348 W ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:26:58.978 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:26:58.999 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:26:59.247 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Created
Sep 09 08:26:59.315 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:26:59.378 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:26:59.429 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/init1 reason/Started
Sep 09 08:26:59.706 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/init2 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:26:59.823 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:26:59.886 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:26:59.930 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-1 in StatefulSet ss2 successful (2 times)
Sep 09 08:27:00.013 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/init2 reason/Created
Sep 09 08:27:00.013 W ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:27:00.013 W ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 container/run1 reason/NotReady
Sep 09 08:27:00.128 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/init2 reason/Started
Sep 09 08:27:00.727 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/run1 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:27:01.069 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/run1 reason/Created
Sep 09 08:27:01.134 I ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 container/run1 reason/Started
Sep 09 08:27:01.209 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)"
Sep 09 08:27:01.263 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)" (2 times)
Sep 09 08:27:03.063 I ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 node/ reason/Created
Sep 09 08:27:03.211 I ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:27:05.682 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(01362b0c9088dfd665a3a094a09cfec7659db7752a3c3b2744043b1ff726bf33): netplugin failed: "2020/09/09 08:23:56 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-downward-api-2833;K8S_POD_NAME=downward-api-61172db3-3d96-4583-904f-a2283b9cd03c;K8S_POD_INFRA_CONTAINER_ID=01362b0c9088dfd665a3a094a09cfec7659db7752a3c3b2744043b1ff726bf33, CNI_NETNS=/var/run/netns/b180b462-d971-40d4-9f0c-5880faca044e).\n"
Sep 09 08:27:05.843 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-cl8gs_e2e-webhook-5914_e8aad72c-b47e-4bdb-8d68-c0f42ea31d83_0(11a2727104ef7c5bf2a0eabd12cada07b33ed5a6864d971e527db87ad820eb57): netplugin failed: "2020/09/09 08:25:00 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-webhook-5914;K8S_POD_NAME=sample-webhook-deployment-7bc8486f8c-cl8gs;K8S_POD_INFRA_CONTAINER_ID=11a2727104ef7c5bf2a0eabd12cada07b33ed5a6864d971e527db87ad820eb57, CNI_NETNS=/var/run/netns/8f8a4563-1c33-4e93-be14-e3221681171e).\n"
Sep 09 08:27:07.250 W ns/e2e-init-container-9399 pod/pod-init-77a79d55-7049-4940-9c87-a98957c0e9ed node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:27:07.333 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:27:07.391 I ns/e2e-statefulset-5052 pod/ss2-1 node/ reason/Created
Sep 09 08:27:07.423 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-1 in StatefulSet ss2 successful (3 times)
Sep 09 08:27:07.471 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:27:08.419 W ns/e2e-var-expansion-1103 pod/var-expansion-d6e7d4f7-ae77-4cce-8711-00ea52bd308b node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:27:09.433 W ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:27:09.921 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(c1f62085d54b8fbb278bed495f80f6686c5fb4e5bd5169bb074212eae02d32b9): netplugin failed: "2020/09/09 08:24:01 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-pods-6560;K8S_POD_NAME=server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e;K8S_POD_INFRA_CONTAINER_ID=c1f62085d54b8fbb278bed495f80f6686c5fb4e5bd5169bb074212eae02d32b9, CNI_NETNS=/var/run/netns/2e1c601a-c252-4d3d-a591-7ac8e4606573).\n"
Sep 09 08:27:10.658 I ns/e2e-statefulset-9097 pod/ss-0 node/ reason/Created
Sep 09 08:27:10.684 I ns/e2e-statefulset-9097 statefulset/ss reason/SuccessfulCreate create Pod ss-0 in StatefulSet ss successful
Sep 09 08:27:10.746 I ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:27:10.811 W ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b_e2e-projected-6946_e7e57198-eba3-44fb-b353-46429cd0d40b_0(97ae6cbd138f2d8cd3b9655552d30a9bc3e568ad2a16c4a27a17de601788f8fd): netplugin failed: "2020/09/09 08:25:01 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-projected-6946;K8S_POD_NAME=pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b;K8S_POD_INFRA_CONTAINER_ID=97ae6cbd138f2d8cd3b9655552d30a9bc3e568ad2a16c4a27a17de601788f8fd, CNI_NETNS=/var/run/netns/f0251a53-77fe-45f0-84d7-d9ea11b5fbae).\n"
Sep 09 08:27:12.669 W ns/e2e-init-container-1915 pod/pod-init-fd7963de-2599-4d33-9a86-13cfed42b959 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:27:13.473 I ns/e2e-statefulset-5052 pod/ss2-1 reason/AddedInterface Add eth0 [10.128.124.18/23]
Sep 09 08:27:13.750 W ns/e2e-services-252 pod/kube-proxy-mode-detector node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:27:14.150 I ns/e2e-downward-api-7567 pod/downwardapi-volume-efeee222-b835-4fd9-83dc-46c7f5cf39f4 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:27:14.228 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:27:14.464 I ns/e2e-downward-api-7567 pod/downwardapi-volume-efeee222-b835-4fd9-83dc-46c7f5cf39f4 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Created
Sep 09 08:27:14.509 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:27:14.550 I ns/e2e-downward-api-7567 pod/downwardapi-volume-efeee222-b835-4fd9-83dc-46c7f5cf39f4 node/ostest-5xqm8-worker-0-cbbx9 container/client-container reason/Started
Sep 09 08:27:14.616 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:27:14.697 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(153779b1843f35c4cac131592772d3a1550de22672310f2357d42acc06061582): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (13 times)
Sep 09 08:27:15.209 W ns/e2e-downward-api-7567 pod/downwardapi-volume-efeee222-b835-4fd9-83dc-46c7f5cf39f4 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:27:15.349 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:27:15.438 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 08:27:15.440 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:27:15.481 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-0 in StatefulSet ss2 successful (2 times)
Sep 09 08:27:15.658 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.125.161:80/index.html": dial tcp 10.128.125.161:80: connect: connection refused
Sep 09 08:27:15.700 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:27:16.135 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ reason/Created
Sep 09 08:27:16.328 I ns/e2e-services-252 replicationcontroller/affinity-clusterip-timeout reason/SuccessfulCreate Created pod: affinity-clusterip-timeout-w9fc2
Sep 09 08:27:16.328 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:27:16.447 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ reason/Created
Sep 09 08:27:16.458 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ reason/Created
Sep 09 08:27:16.504 I ns/e2e-services-252 replicationcontroller/affinity-clusterip-timeout reason/SuccessfulCreate Created pod: affinity-clusterip-timeout-7bs6z
Sep 09 08:27:16.666 I ns/e2e-services-252 replicationcontroller/affinity-clusterip-timeout reason/SuccessfulCreate Created pod: affinity-clusterip-timeout-j28fn
Sep 09 08:27:16.767 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:27:16.805 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:27:16.885 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:27:17.463 W ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "default-token-glj5z" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:27:17.691 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.125.161:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:27:18.969 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:27:20.101 W ns/e2e-downward-api-7567 pod/downwardapi-volume-efeee222-b835-4fd9-83dc-46c7f5cf39f4 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:27:22.509 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ reason/Created
Sep 09 08:27:22.567 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:27:23.700 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 reason/AddedInterface Add eth0 [10.128.149.212/23]
Sep 09 08:27:24.242 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1m28.108991938s: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: tcp connection to 10.196.3.65:6443 succeeded
Sep 09 08:27:24.242 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1m29.445271115s: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: tcp connection to 10.196.3.65:6443 succeeded
Sep 09 08:27:24.265 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1m29.577564777s: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: tcp connection to 10.196.3.65:6443 succeeded
Sep 09 08:27:24.334 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-timeout reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:27:24.637 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-timeout reason/Created
Sep 09 08:27:24.679 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-timeout reason/Started
Sep 09 08:27:25.077 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-timeout reason/Ready
Sep 09 08:27:26.813 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:27:26.905 I ns/e2e-statefulset-5052 pod/ss2-0 node/ reason/Created
Sep 09 08:27:26.961 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:27:26.961 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulCreate create Pod ss2-0 in StatefulSet ss2 successful (3 times)
Sep 09 08:27:28.394 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-cl8gs_e2e-webhook-5914_e8aad72c-b47e-4bdb-8d68-c0f42ea31d83_0(d47db242bb5a89e1bf485c546ac34a8803bd8e25e9baf7a904b18ad87006313c): [e2e-webhook-5914/sample-webhook-deployment-7bc8486f8c-cl8gs:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:30.125 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(71f00f3bdb1bce436bf14c447e476e3745d8a6a3b81f6e6b7c29b33184762f43): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:30.882 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(7c2170db2637f97adc50fc3a7aa2b84a359732d53c9085f1c3c35c2f32cd4f62): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:32.044 I ns/e2e-statefulset-5052 pod/ss2-0 reason/AddedInterface Add eth0 [10.128.125.161/23]
Sep 09 08:27:32.273 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Ready
Sep 09 08:27:32.525 W ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b_e2e-projected-6946_e7e57198-eba3-44fb-b353-46429cd0d40b_0(2dce53f0d70499e63eae51d0e78d0799291d689f52c24819ae587164f09b6457): [e2e-projected-6946/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:32.696 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:27:33.095 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:27:33.140 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:27:34.548 I ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 reason/AddedInterface Add eth0 [10.128.160.168/23]
Sep 09 08:27:34.721 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:27:35.307 I ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:27:35.653 I ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Created
Sep 09 08:27:35.705 I ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Started
Sep 09 08:27:37.146 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn reason/AddedInterface Add eth0 [10.128.148.36/23]
Sep 09 08:27:37.497 W ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:27:37.895 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-timeout reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:27:38.015 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)"
Sep 09 08:27:38.285 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-timeout reason/Created
Sep 09 08:27:38.360 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-timeout reason/Started
Sep 09 08:27:38.932 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-timeout reason/Ready
Sep 09 08:27:39.417 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:27:39.439 I ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Killing
Sep 09 08:27:39.474 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-2 in StatefulSet ss2 successful (3 times)
Sep 09 08:27:39.651 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(61af4f8437e4d774bf1e67af6811ee9a6e0a09abd55e8366da37336e50576632): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500  (14 times)
Sep 09 08:27:39.846 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.128.125.95:80/index.html": read tcp 10.196.3.122:53804->10.128.125.95:80: read: connection reset by peer
Sep 09 08:27:39.896 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:27:41.683 W ns/e2e-projected-1848 pod/downwardapi-volume-993610de-c735-488b-ba90-1fe9ceb041e6 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:27:41.718 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.128.125.95:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:27:42.038 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:27:42.713 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.128.125.95:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (2 times)
Sep 09 08:27:42.799 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z reason/AddedInterface Add eth0 [10.128.149.64/23]
Sep 09 08:27:43.403 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-timeout reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:27:43.756 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-timeout reason/Created
Sep 09 08:27:43.821 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-timeout reason/Started
Sep 09 08:27:43.953 W ns/e2e-statefulset-5052 pod/ss2-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:27:44.008 I ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Killing
Sep 09 08:27:44.013 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:27:44.016 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-timeout reason/Ready
Sep 09 08:27:44.032 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-1 in StatefulSet ss2 successful (3 times)
Sep 09 08:27:44.329 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: Get "http://10.128.124.18:80/index.html": dial tcp 10.128.124.18:80: connect: connection refused
Sep 09 08:27:44.374 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:27:45.054 I ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ reason/Created
Sep 09 08:27:45.204 I ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:27:45.906 I ns/e2e-services-252 pod/execpod-affinityxwmtc node/ reason/Created
Sep 09 08:27:46.058 I ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:27:46.307 W ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 reason/FailedMount MountVolume.SetUp failed for volume "default-token-9ksvl" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:27:47.832 I ns/e2e-statefulset-9097 pod/ss-0 reason/AddedInterface Add eth0 [10.128.165.207/23]
Sep 09 08:27:48.520 I ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:27:48.955 I ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:27:49.041 I ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:27:49.954 I ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:27:51.225 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(12bacbe140ecd14293513c24692245293a2f90976719fee6cc983834bf63a75b): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:51.732 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-cl8gs_e2e-webhook-5914_e8aad72c-b47e-4bdb-8d68-c0f42ea31d83_0(c6b822d3c2dabe1703864c063f659c24f0e9fc0a1263af6500f5dd2f27ba4aaf): [e2e-webhook-5914/sample-webhook-deployment-7bc8486f8c-cl8gs:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:51.769 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:27:51.785 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:27:51.907 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(c70e2311bcb1050f756114f8ab358bf13d868abc4a758688494f818a949a1642): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:52.818 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:27:53.771 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:27:54.759 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:27:55.028 I ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:27:55.350 I ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Created
Sep 09 08:27:55.405 I ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Started
Sep 09 08:27:55.763 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:27:56.231 I ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Ready
Sep 09 08:27:56.506 W ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b_e2e-projected-6946_e7e57198-eba3-44fb-b353-46429cd0d40b_0(37bc62275a7ef1cc46fdf221cd2fed1d41f5f7a60bad3dc7b92c62c95bea0f23): [e2e-projected-6946/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:27:56.767 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:27:57.231 W ns/e2e-statefulset-5052 pod/ss2-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:27:57.252 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:27:57.267 I ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 08:27:57.272 I ns/e2e-statefulset-5052 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-0 in StatefulSet ss2 successful (3 times)
Sep 09 08:27:57.683 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.125.161:80/index.html": dial tcp 10.128.125.161:80: connect: connection refused
Sep 09 08:27:57.712 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:27:57.781 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:27:58.773 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:27:59.053 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:27:59.662 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.125.161:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:27:59.767 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:28:00.628 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 reason/AddedInterface Add eth0 [10.128.151.226/23]
Sep 09 08:28:00.776 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:28:01.267 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:28:01.360 I ns/e2e-statefulset-9097 pod/ss-1 node/ reason/Created
Sep 09 08:28:01.382 I ns/e2e-statefulset-9097 statefulset/ss reason/SuccessfulCreate create Pod ss-1 in StatefulSet ss successful
Sep 09 08:28:01.393 I ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:28:01.401 I ns/e2e-statefulset-9097 pod/ss-2 node/ reason/Created
Sep 09 08:28:01.426 I ns/e2e-statefulset-9097 statefulset/ss reason/SuccessfulCreate create Pod ss-2 in StatefulSet ss successful
Sep 09 08:28:01.481 I ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:28:01.695 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Created
Sep 09 08:28:01.774 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (11 times)
Sep 09 08:28:01.805 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Started
Sep 09 08:28:02.259 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Ready
Sep 09 08:28:02.775 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (12 times)
Sep 09 08:28:03.772 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (13 times)
Sep 09 08:28:04.781 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (14 times)
Sep 09 08:28:05.760 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (15 times)
Sep 09 08:28:06.768 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (16 times)
Sep 09 08:28:06.789 W ns/e2e-statefulset-5052 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:28:07.045 W ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d_e2e-emptydir-1596_f7869a91-18e0-4e47-8e06-73ec8e401d19_0(c6207b728b40852c17fede8f113f524b45790a022f93778aa57d4c199cee0b5e): [e2e-emptydir-1596/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:28:07.062 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss-1_e2e-statefulset-9097_251ea516-8b32-4302-9af8-a2a14ab2f5b6_0(742e7f4993273e81a06d1e9604e8a0ba536871f466fa43b2d2fb44ffe55b73d4): [e2e-statefulset-9097/ss-1:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:28:07.079 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ss2-0_e2e-statefulset-4657_84e19760-8b10-4699-9dee-21281da8d077_0(eea80d321f96241afd8ad563d94d05340e6ec0f84c11a83a2fc3beadacb42ece): [e2e-statefulset-4657/ss2-0:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n (15 times)
Sep 09 08:28:07.145 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sample-webhook-deployment-7bc8486f8c-cl8gs_e2e-webhook-5914_e8aad72c-b47e-4bdb-8d68-c0f42ea31d83_0(d31a29418368d856a1c84d278d8698a58bdfc0e6e252fd852c2d2d0eb38b42d7): [e2e-webhook-5914/sample-webhook-deployment-7bc8486f8c-cl8gs:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": read tcp 127.0.0.1:43082->127.0.0.1:5036: read: connection reset by peer
Sep 09 08:28:07.765 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (17 times)
Sep 09 08:28:08.019 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:28:08.058 I ns/e2e-statefulset-4657 statefulset/ss2 reason/SuccessfulDelete delete Pod ss2-0 in StatefulSet ss2 successful
Sep 09 08:28:08.097 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 08:28:08.097 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 08:28:08.204 W ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b_e2e-projected-6946_e7e57198-eba3-44fb-b353-46429cd0d40b_0(0662b0a40e572d6e517fa910f5bba3bfbf7ea61e056d7010325fb687caaa6d66): [e2e-projected-6946/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": dial tcp [::1]:5036: connect: connection refused
Sep 09 08:28:08.283 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:28:08.762 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (18 times)
Sep 09 08:28:09.776 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (19 times)
Sep 09 08:28:10.939 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (20 times)
Sep 09 08:28:11.481 I ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ reason/Created
Sep 09 08:28:11.684 I ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:28:11.780 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (21 times)
Sep 09 08:28:12.779 I ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:28:13.207 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(7979bd224059470b42ca006356ab4a0fddcd64d29243237db534f1d9a15d2b61): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:28:13.831 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(ab9c6ee7e849e82e95d8b70c677ee3493cba58e6fab0afeb5de903326cf646e9): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:28:16.587 W ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500
Sep 09 08:28:16.614 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Killing
Sep 09 08:28:16.812 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:28:16.831 W ns/e2e-statefulset-4657 pod/ss2-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:28:17.088 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Created
Sep 09 08:28:17.118 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Started
Sep 09 08:28:17.321 W ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Restarted
Sep 09 08:28:18.243 I test="[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:28:19.335 W ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:28:19.427 I ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 container/liveness reason/Killing
Sep 09 08:28:19.830 I ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 node/ reason/Created
Sep 09 08:28:20.006 I ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:28:20.800 I ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs reason/AddedInterface Add eth0 [10.128.195.153/23]
Sep 09 08:28:20.900 I ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d reason/AddedInterface Add eth0 [10.128.136.116/23]
Sep 09 08:28:21.094 W ns/e2e-container-probe-7219 pod/liveness-158ab93b-151e-4ffd-b088-77a714cddc87 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:28:21.561 I ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:28:21.669 I ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:28:21.755 I ns/e2e-statefulset-9097 pod/ss-2 reason/AddedInterface Add eth0 [10.128.165.174/23]
Sep 09 08:28:21.760 I ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 node/ reason/Created
Sep 09 08:28:21.818 I ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:28:21.897 I ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Created
Sep 09 08:28:21.993 I ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Started
Sep 09 08:28:22.078 I ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:28:22.176 I ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:28:22.604 I ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b reason/AddedInterface Add eth0 [10.128.196.9/23]
Sep 09 08:28:22.610 I ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:28:22.969 I ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/Ready
Sep 09 08:28:22.997 I ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:28:23.115 I ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:28:23.307 I ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:28:23.627 I ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Created
Sep 09 08:28:23.750 I ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 container/projected-configmap-volume-test reason/Started
Sep 09 08:28:23.869 W ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:28:24.051 I ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:28:25.384 W ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:28:25.648 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:28:26.481 I ns/e2e-statefulset-9097 pod/ss-1 reason/AddedInterface Add eth0 [10.128.165.208/23]
Sep 09 08:28:26.969 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 08:28:26.973 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ reason/Created
Sep 09 08:28:27.088 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:28:27.259 W clusteroperator/network changed Progressing to False
Sep 09 08:28:27.289 I ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:28:27.308 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:28:27.308 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 container/sample-webhook reason/NotReady
Sep 09 08:28:27.380 W ns/e2e-projected-6946 pod/pod-projected-configmaps-615d2bbb-5c17-4d9f-8035-2ee0a3e8f67b node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:28:27.638 I ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:28:27.712 I ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:28:28.263 W ns/e2e-emptydir-1596 pod/pod-e0faa4fe-80c0-4264-91b4-e796bd4e096d node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:28:28.402 W ns/e2e-webhook-5914 pod/sample-webhook-deployment-7bc8486f8c-cl8gs node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:28:28.445 I ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:28:28.669 I ns/e2e-pod-network-test-720 pod/netserver-0 node/ reason/Created
Sep 09 08:28:28.696 I ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:28:28.742 I ns/e2e-pod-network-test-720 pod/netserver-1 node/ reason/Created
Sep 09 08:28:28.775 I ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:28:28.784 I ns/e2e-pod-network-test-720 pod/netserver-2 node/ reason/Created
Sep 09 08:28:28.814 I ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:28:31.498 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (7 times)
Sep 09 08:28:31.754 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.5.10:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (2 times)
Sep 09 08:28:33.636 I ns/e2e-deployment-2153 deployment/test-recreate-deployment reason/ScalingReplicaSet Scaled up replica set test-recreate-deployment-7589bf48bb to 1
Sep 09 08:28:33.693 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ reason/Created
Sep 09 08:28:33.712 I ns/e2e-deployment-2153 replicaset/test-recreate-deployment-7589bf48bb reason/SuccessfulCreate Created pod: test-recreate-deployment-7589bf48bb-mxdgq
Sep 09 08:28:33.764 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:28:34.835 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (22 times)
Sep 09 08:28:34.840 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:28:35.260 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:28:35.291 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:28:36.017 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404
Sep 09 08:28:36.041 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:28:36.261 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:28:37.020 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (2 times)
Sep 09 08:28:37.258 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:28:38.014 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (3 times)
Sep 09 08:28:38.437 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:28:39.048 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (4 times)
Sep 09 08:28:39.320 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:28:39.521 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(5a9ac2a49d52287069dfb4e56772b4309833025a87034d9d9525f333058597e2): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:28:40.055 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (5 times)
Sep 09 08:28:40.305 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(a0f9b4898f7f79e083af6c7a1509499f0e329b0bcd8872b0ef22e6e6f30f276b): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:28:40.336 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:28:41.103 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (6 times)
Sep 09 08:28:41.275 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:28:41.525 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (11 times)
Sep 09 08:28:42.030 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (7 times)
Sep 09 08:28:42.263 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:28:43.026 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (8 times)
Sep 09 08:28:43.276 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:28:44.007 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (9 times)
Sep 09 08:28:44.268 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:28:45.014 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (10 times)
Sep 09 08:28:45.363 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404 (11 times)
Sep 09 08:28:45.751 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:28:45.810 I ns/e2e-statefulset-9097 statefulset/ss reason/SuccessfulDelete delete Pod ss-2 in StatefulSet ss successful
Sep 09 08:28:45.822 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:28:45.834 I ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Killing
Sep 09 08:28:45.860 I ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 08:28:45.860 I ns/e2e-statefulset-9097 statefulset/ss reason/SuccessfulDelete delete Pod ss-1 in StatefulSet ss successful
Sep 09 08:28:45.886 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:28:45.959 I ns/e2e-statefulset-9097 statefulset/ss reason/SuccessfulDelete delete Pod ss-0 in StatefulSet ss successful
Sep 09 08:28:46.034 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.128.165.174:80/index.html": dial tcp 10.128.165.174:80: connect: connection refused
Sep 09 08:28:46.275 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.165.208:80/index.html": dial tcp 10.128.165.208:80: connect: connection refused
Sep 09 08:28:47.877 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:28:47.911 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:28:47.972 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:28:48.242 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.128.165.174:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:28:48.301 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe failed: Get "http://10.128.165.208:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 09 08:28:48.968 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:28:50.625 I ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a reason/AddedInterface Add eth0 [10.128.127.206/23]
Sep 09 08:28:51.274 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (12 times)
Sep 09 08:28:51.366 I ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 container/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:28:53.102 I ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 container/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a reason/Ready
Sep 09 08:28:55.872 I ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ reason/Created
Sep 09 08:28:55.919 I ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:28:57.225 W ns/e2e-statefulset-9097 pod/ss-1 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:28:57.431 W ns/e2e-statefulset-9097 pod/ss-0 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:29:00.163 W ns/e2e-statefulset-9097 pod/ss-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:29:00.594 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (33 times)
Sep 09 08:29:01.028 E clusteroperator/kube-apiserver changed Degraded to True: StaticPods_Error: StaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container "kube-apiserver-check-endpoints" is not ready: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)
Sep 09 08:29:01.070 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("StaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)")
Sep 09 08:29:01.269 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (13 times)
Sep 09 08:29:01.968 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ostest.shiftstack.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=5s: context deadline exceeded
Sep 09 08:29:01.968 I oauth-apiserver OAuth API stopped responding to GET requests: Get https://api.ostest.shiftstack.com:6443/apis/oauth.openshift.io/v1/oauthaccesstokens/missing?timeout=5s: context deadline exceeded
Sep 09 08:29:01.968 E kube-apiserver Kube API started failing: Get https://api.ostest.shiftstack.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded
Sep 09 08:29:02.078 W ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:29:03.163 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(b6d2c4b189a38179d5b59e5598f1ef4512a23399188070400ba217fed054ed49): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:29:03.956 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e_e2e-pods-6560_61187a01-8366-45cc-b2fc-9f3c00eb71a0_0(9df5daf386e19c31342e4d84e86f7ac2bb30d71c57bb340128917ef1ed834989): [e2e-pods-6560/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:29:03.968 - 14s   E kube-apiserver Kube API is not responding to GET requests
Sep 09 08:29:03.968 - 14s   E oauth-apiserver OAuth API is not responding to GET requests
Sep 09 08:29:03.968 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 09 08:29:07.226 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: Get "http://10.196.3.122:8090/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (5 times)
Sep 09 08:29:11.265 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (14 times)
Sep 09 08:29:11.757 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbbdc954cefdfc297e03ef3b9b36211b8f7378378a89e071585e70b1121161d2
Sep 09 08:29:12.352 W clusteroperator/authentication changed Available to False: WellKnown_NotReady: WellKnownAvailable: The well-known endpoint is not yet available: failed to GET well-known https://10.196.3.65:6443/.well-known/oauth-authorization-server: dial tcp 10.196.3.65:6443: connect: connection refused
Sep 09 08:29:12.374 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to GET well-known https://10.196.3.65:6443/.well-known/oauth-authorization-server: dial tcp 10.196.3.65:6443: connect: connection refused")
Sep 09 08:29:12.456 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Degraded message changed from "" to "WellKnownReadyControllerDegraded: failed to GET well-known https://10.196.3.65:6443/.well-known/oauth-authorization-server: dial tcp 10.196.3.65:6443: connect: connection refused"
Sep 09 08:29:13.585 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/NotReady
Sep 09 08:29:13.585 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Restarted
Sep 09 08:29:13.585 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Restarted
Sep 09 08:29:13.621 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: failed to establish a TCP connection to 10.196.3.65:6443: dial tcp 10.196.3.65:6443: connect: connection refused (8 times)
Sep 09 08:29:14.946 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-2 node/ostest-5xqm8-master-2 reason/Unhealthy Liveness probe failed: Get "https://10.196.1.17:6443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (5 times)
Sep 09 08:29:14.980 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-2 node/ostest-5xqm8-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.196.1.17:6443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (5 times)
Sep 09 08:29:15.974 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-2 node/ostest-5xqm8-master-2 reason/Unhealthy Liveness probe failed: Get "https://10.196.1.17:17697/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (4 times)
Sep 09 08:29:16.140 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: failed to establish a TCP connection to 10.196.3.65:6443: dial tcp 10.196.3.65:6443: connect: connection refused (8 times)
Sep 09 08:29:17.485 W ns/openshift-apiserver deployment/apiserver reason/ConnectivityOutageDetected Connectivity outage detected: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: failed to establish a TCP connection to 10.196.3.65:6443: dial tcp 10.196.3.65:6443: connect: connection refused (8 times)
Sep 09 08:29:18.555 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500
Sep 09 08:29:19.578 I kube-apiserver Kube API started responding to GET requests
Sep 09 08:29:19.578 I oauth-apiserver OAuth API started responding to GET requests
Sep 09 08:29:19.582 I openshift-apiserver OpenShift API started responding to GET requests
Sep 09 08:29:19.699 W clusteroperator/kube-apiserver changed Degraded to False: AsExpected: NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container "kube-apiserver" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container "kube-apiserver-check-endpoints" is not ready: unknown reason
Sep 09 08:29:19.817 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason")
Sep 09 08:29:20.227 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-2 node/ostest-5xqm8-master-2 reason/Unhealthy Liveness probe failed: Get "https://10.196.1.17:6443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (6 times)
Sep 09 08:29:21.382 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (15 times)
Sep 09 08:29:21.657 I ns/e2e-pod-network-test-720 pod/netserver-2 reason/AddedInterface Add eth0 [10.128.156.25/23]
Sep 09 08:29:22.442 I ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:22.839 I ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:29:22.928 I ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:29:23.946 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-2 node/ostest-5xqm8-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.196.1.17:17697/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (4 times)
Sep 09 08:29:23.988 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-2 node/ostest-5xqm8-master-2 reason/Unhealthy Readiness probe failed: Get "https://10.196.1.17:6443/healthz": context deadline exceeded (2 times)
Sep 09 08:29:24.236 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-2 node/ostest-5xqm8-master-2 reason/Unhealthy Liveness probe failed: Get "https://10.196.1.17:6443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (5 times)
Sep 09 08:29:24.862 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to GET well-known https://10.196.3.65:6443/.well-known/oauth-authorization-server: dial tcp 10.196.3.65:6443: connect: connection refused" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2"
Sep 09 08:29:24.980 I ns/e2e-pod-network-test-720 pod/netserver-1 reason/AddedInterface Add eth0 [10.128.156.215/23]
Sep 09 08:29:24.980 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to GET well-known https://10.196.3.65:6443/.well-known/oauth-authorization-server: dial tcp 10.196.3.65:6443: connect: connection refused" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2"
Sep 09 08:29:25.031 I ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 reason/AddedInterface Add eth0 [10.128.134.139/23]
Sep 09 08:29:25.156 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 04:29:25.273 I test="[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] [sig-node] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:29:25.306 I ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 reason/AddedInterface Add eth0 [10.128.129.124/23]
Sep 09 08:29:25.667 I ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 node/ostest-5xqm8-worker-0-rzx47 container/env-test reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:29:25.730 I ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:25.999 I ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 node/ostest-5xqm8-worker-0-rzx47 container/env-test reason/Created
Sep 09 08:29:26.025 I ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:29:26.108 I ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:26.111 I ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:29:26.160 I ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 node/ostest-5xqm8-worker-0-rzx47 container/env-test reason/Started
Sep 09 08:29:26.160 I ns/e2e-pod-network-test-720 pod/netserver-0 reason/AddedInterface Add eth0 [10.128.157.66/23]
Sep 09 08:29:26.254 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq reason/AddedInterface Add eth0 [10.128.167.71/23]
Sep 09 08:29:26.449 I ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 node/ reason/Created
Sep 09 08:29:26.450 I ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:29:26.515 I ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:29:26.595 I ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:29:27.048 I ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:27.178 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downward-api-61172db3-3d96-4583-904f-a2283b9cd03c_e2e-downward-api-2833_a023adb2-dccb-4250-a964-6e820e7dafb5_0(6019f12c95e63a1856b7a3c8d2b28f8af9957f129356c89d19fad546450feb08): [e2e-downward-api-2833/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:29:27.180 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:27.400 I ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:29:27.462 I ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:29:27.506 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Created
Sep 09 08:29:27.599 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Started
Sep 09 08:29:28.825 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Ready
Sep 09 08:29:28.982 W ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:29:29.131 W ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:29:29.835 I ns/e2e-deployment-2153 deployment/test-recreate-deployment reason/ScalingReplicaSet Scaled down replica set test-recreate-deployment-7589bf48bb to 0
Sep 09 08:29:29.892 W ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:29:29.918 I ns/e2e-deployment-2153 replicaset/test-recreate-deployment-7589bf48bb reason/SuccessfulDelete Deleted pod: test-recreate-deployment-7589bf48bb-mxdgq
Sep 09 08:29:30.405 W ns/e2e-secrets-1707 pod/pod-configmaps-f6a94948-d117-45f9-be78-41b071609820 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:29:30.440 W ns/e2e-emptydir-3468 pod/pod-5edc8e18-1561-40fd-9d06-f88b38957080 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:29:30.557 I ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 container/agnhost reason/Killing
Sep 09 08:29:31.236 W ns/e2e-deployment-2153 pod/test-recreate-deployment-7589bf48bb-mxdgq node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:29:31.262 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (16 times)
Sep 09 08:29:31.348 I ns/e2e-deployment-2153 deployment/test-recreate-deployment reason/ScalingReplicaSet Scaled up replica set test-recreate-deployment-f79dd4667 to 1
Sep 09 08:29:31.385 I ns/e2e-deployment-2153 pod/test-recreate-deployment-f79dd4667-8jhrm node/ reason/Created
Sep 09 08:29:31.438 I ns/e2e-deployment-2153 replicaset/test-recreate-deployment-f79dd4667 reason/SuccessfulCreate Created pod: test-recreate-deployment-f79dd4667-8jhrm
Sep 09 08:29:31.491 I ns/e2e-deployment-2153 pod/test-recreate-deployment-f79dd4667-8jhrm node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:29:32.471 W ns/e2e-downward-api-2833 pod/downward-api-61172db3-3d96-4583-904f-a2283b9cd03c node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:29:32.930 I ns/e2e-pod-network-test-5579 pod/netserver-0 node/ reason/Created
Sep 09 08:29:33.010 I ns/e2e-pod-network-test-5579 pod/netserver-1 node/ reason/Created
Sep 09 08:29:33.024 I ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:29:33.110 I ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:29:33.130 I ns/e2e-pod-network-test-5579 pod/netserver-2 node/ reason/Created
Sep 09 08:29:33.252 I ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 04:29:33.424 I test="[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 08:29:33.542 I ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ reason/Created
Sep 09 08:29:33.584 I ns/e2e-deployment-9061 replicaset/test-cleanup-controller reason/SuccessfulCreate Created pod: test-cleanup-controller-7hvkw
Sep 09 08:29:33.621 I ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:29:33.901 W ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:29:33.901 W ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 container/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a reason/NotReady
Sep 09 08:29:33.968 W ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:29:33.968 - 314s  W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:29:34.357 W ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "default-token-j2xxx" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:29:34.725 I ns/e2e-prestop-8695 pod/server node/ reason/Created
Sep 09 08:29:34.851 I ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:29:35.058 I ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:29:36.422 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:29:37.558 W ns/e2e-kubelet-test-1041 pod/busybox-readonly-fs080e45ba-ba0a-4087-877b-1b75516e534a node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:29:40.213 I ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:29:40.663 I ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:29:40.811 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Restarted
Sep 09 08:29:41.107 I ns/e2e-pod-network-test-720 pod/test-container-pod node/ reason/Created
Sep 09 08:29:41.223 I ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:29:41.302 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (17 times)
Sep 09 08:29:41.355 W ns/e2e-pods-6560 pod/server-envvars-e219f22c-5f32-4ea2-9970-b1b6aaf40f8e node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:29:42.611 W ns/e2e-deployment-2153 pod/test-recreate-deployment-f79dd4667-8jhrm node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:29:43.357 W ns/e2e-deployment-2153 pod/test-recreate-deployment-f79dd4667-8jhrm node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:29:44.026 I ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request reason/AddedInterface Add eth0 [10.128.121.30/23]
Sep 09 08:29:44.778 I ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:45.148 I ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Created
Sep 09 08:29:45.218 I ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Started
Sep 09 08:29:45.959 I ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/Ready
Sep 09 08:29:46.179 I ns/e2e-pod-network-test-720 pod/test-container-pod reason/AddedInterface Add eth0 [10.128.156.51/23]
Sep 09 08:29:46.803 I ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:47.099 I ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:29:47.171 I ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:29:47.993 I ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:29:48.068 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ reason/Created
Sep 09 08:29:48.170 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:29:51.305 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (18 times)
Sep 09 08:29:51.635 I ns/e2e-gc-3244 pod/pod1 node/ reason/Created
Sep 09 08:29:51.715 I ns/e2e-gc-3244 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:29:51.747 I ns/e2e-gc-3244 pod/pod2 node/ reason/Created
Sep 09 08:29:51.801 I ns/e2e-gc-3244 pod/pod2 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:29:51.816 I ns/e2e-gc-3244 pod/pod3 node/ reason/Created
Sep 09 08:29:51.923 I ns/e2e-gc-3244 pod/pod3 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:29:51.966 W ns/e2e-gc-3244 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:29:52.052 W ns/e2e-gc-3244 pod/pod2 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:29:52.132 W ns/e2e-gc-3244 pod/pod3 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:29:52.245 W ns/e2e-gc-3244 pod/pod2 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:29:52.380 W ns/e2e-gc-3244 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:29:52.456 W ns/e2e-gc-3244 pod/pod3 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:29:52.458 W ns/e2e-gc-3244 pod/pod2 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod2_e2e-gc-3244_fc94682e-fe76-4edc-bc70-be35b27e5b58_0(b29a7d97981f9f6d43bf3c9c3efa3143e0738f2dae0a391df0757dad1b5255ce): Multus: [e2e-gc-3244/pod2]: error getting pod: pods "pod2" not found
Sep 09 08:29:52.807 W ns/e2e-gc-3244 pod/pod3 node/ostest-5xqm8-worker-0-twrlr reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod3_e2e-gc-3244_e3de13fd-2a80-407f-abd9-cc1cf4d6a1f4_0(cc66591c0ff975605afee88047012bc4f22a1f8b67cfadcd3280e809876bfe7d): Multus: [e2e-gc-3244/pod3]: error getting pod: pods "pod3" not found
Sep 09 08:29:53.997 I ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 reason/AddedInterface Add eth0 [10.128.141.191/23]
Sep 09 08:29:54.804 I ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:55.202 I ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Created
Sep 09 08:29:55.420 I ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Started
Sep 09 08:29:55.904 I ns/e2e-pod-network-test-5579 pod/netserver-2 reason/AddedInterface Add eth0 [10.128.192.130/23]
Sep 09 08:29:56.602 I ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:29:56.885 I ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:29:57.037 I ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:29:57.053 W ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:29:58.693 E kube-apiserver failed contacting the API: Timeout: Too large resource version: 886455, current: 683998
Sep 09 08:29:58.706 W ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:29:58.736 W ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:29:58.769 W ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:29:58.800 W ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:29:59.794 E ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver container exited with code 2 (Error): 
Sep 09 08:30:00.017 W ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:30:00.017 W ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:30:00.084 E ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/webserver container exited with code 2 (Error): 
Sep 09 08:30:00.162 W ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:30:00.162 W ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:30:00.633 W ns/e2e-downward-api-9118 pod/downwardapi-volume-dcba0dab-d9c0-4201-b35e-171d9b9ad2f4 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:30:01.270 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (19 times)
Sep 09 08:30:01.819 I ns/e2e-pod-network-test-5579 pod/netserver-1 reason/AddedInterface Add eth0 [10.128.192.154/23]
Sep 09 08:30:02.312 I ns/e2e-pod-network-test-5579 pod/netserver-0 reason/AddedInterface Add eth0 [10.128.192.226/23]
Sep 09 08:30:02.586 I ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:30:02.913 W ns/e2e-pod-network-test-720 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:30:03.022 I ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Created
Sep 09 08:30:03.022 I ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw reason/AddedInterface Add eth0 [10.128.163.216/23]
Sep 09 08:30:03.081 I ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Started
Sep 09 08:30:03.381 I ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:30:03.716 I ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ reason/Created
Sep 09 08:30:03.879 I ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:30:03.910 I ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:30:03.924 I ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:30:03.968 W ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:30:03.986 I ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:30:04.201 I ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Created
Sep 09 08:30:04.281 I ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Started
Sep 09 08:30:04.804 I ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/Ready
Sep 09 08:30:04.998 W ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedMount MountVolume.SetUp failed for volume "default-token-mxgx2" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:30:05.698 W ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:30:05.726 I ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/Killing
Sep 09 08:30:06.067 W ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 1s
Sep 09 08:30:06.080 W ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 08:30:06.114 I ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-timeout reason/Killing
Sep 09 08:30:06.136 I ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-timeout reason/Killing
Sep 09 08:30:06.145 W ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:30:06.202 I ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-timeout reason/Killing
Sep 09 08:30:06.228 W ns/e2e-services-252 service/affinity-clusterip-timeout reason/FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service e2e-services-252/affinity-clusterip-timeout: Error updating affinity-clusterip-timeout-t8p69 EndpointSlice for Service e2e-services-252/affinity-clusterip-timeout: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "affinity-clusterip-timeout-t8p69": the object has been modified; please apply your changes to the latest version and try again
Sep 09 08:30:06.244 W ns/e2e-services-252 endpoints/affinity-clusterip-timeout reason/FailedToUpdateEndpoint Failed to update endpoint e2e-services-252/affinity-clusterip-timeout: Operation cannot be fulfilled on endpoints "affinity-clusterip-timeout": the object has been modified; please apply your changes to the latest version and try again
Sep 09 08:30:06.748 I ns/e2e-deployment-9061 deployment/test-cleanup-deployment reason/ScalingReplicaSet Scaled up replica set test-cleanup-deployment-bccdddf9b to 1
Sep 09 08:30:06.817 I ns/e2e-deployment-9061 pod/test-cleanup-deployment-bccdddf9b-mthk4 node/ reason/Created
Sep 09 08:30:06.883 W ns/e2e-pod-network-test-720 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:30:07.111 W ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:30:07.111 W ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 container/agnhost-pause reason/NotReady
Sep 09 08:30:07.287 W ns/e2e-pod-network-test-720 pod/test-container-pod node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:30:07.378 W ns/e2e-pod-network-test-720 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:30:08.022 I ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ reason/Created
Sep 09 08:30:08.069 I ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:30:08.085 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:30:08.085 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:30:08.429 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:30:09.150 E ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-timeout container exited with code 137 (Error): 
Sep 09 08:30:09.873 W ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:30:09.873 W ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-timeout reason/NotReady
Sep 09 08:30:10.453 W ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:30:10.453 W ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-timeout reason/NotReady
Sep 09 08:30:11.407 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (20 times)
Sep 09 08:30:11.465 I ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Killing
Sep 09 08:30:11.489 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.5.10:5443/healthz": context deadline exceeded (4 times)
Sep 09 08:30:12.140 E ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints container exited with code 255 (Error): _queue.go:68 +0x184\n\ngoroutine 945 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000e10de0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1066 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000b82de0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1068 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b83f20)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1075 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000d08d80)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1077 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000d09da0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n\ngoroutine 1038 [chan receive]:\nk8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000c485a0)\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\ncreated by k8s.io/client-go/util/workqueue.newQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\n\ngoroutine 1139 [select]:\nk8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000e111a0)\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\ncreated by k8s.io/client-go/util/workqueue.newDelayingQueue\n	k8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\n
Sep 09 08:30:12.611 I ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:30:13.192 W ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:30:13.226 W ns/e2e-deployment-9061 pod/test-cleanup-deployment-bccdddf9b-mthk4 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:30:13.258 W ns/e2e-deployment-9061 pod/test-cleanup-deployment-bccdddf9b-mthk4 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:30:14.967 W ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:30:14.967 W ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 container/httpd reason/NotReady
Sep 09 08:30:15.030 I ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/Ready
Sep 09 08:30:15.304 W ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_server_e2e-prestop-8695_080a3682-ea79-4dbc-9733-a65f7a34f380_0(feb118f23d76cc048b1cc6ff03ae643fa5d6eba923abbd482b283a76a5b41416): [e2e-prestop-8695/server:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 08:30:15.332 W ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e_e2e-configmap-3358_04d15abd-8618-47ba-83b0-56a499dc0de7_0(c15e9318488ac5dbc3b28b76b523503b1528c1437595b04f6191d17eda3d358e): [e2e-configmap-3358/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:30:15.488 W ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4_e2e-security-context-test-129_06784fe7-5cb7-4b8a-918d-74db12a7679c_0(eacc327ec34bdb341ef623775fff7643cc378981fe16b8401048b11b10ddee1d): [e2e-security-context-test-129/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n
Sep 09 08:30:16.203 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/NotReady
Sep 09 08:30:16.203 W ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Restarted
Sep 09 08:30:16.429 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/NotReady
Sep 09 08:30:16.429 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Restarted
Sep 09 08:30:17.437 I ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:30:18.540 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is terminated: Error: _queue.go:68 +0x184\nStaticPodsDegraded: \nStaticPodsDegraded: goroutine 945 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000e10de0)\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\nStaticPodsDegraded: created by k8s.io/client-go/util/workqueue.newQueue\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\nStaticPodsDegraded: goroutine 1066 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000b82de0)\nStaticPodsDegraded: goroutine 1068 [select]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b83f20)\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\nStaticPodsDegraded: created by k8s.io/client-go/util/workqueue.newDelayingQueue\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\nStaticPodsDegraded: goroutine 1075 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000d08d80)\nStaticPodsDegraded: goroutine 1077 [select]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000d09da0)\nStaticPodsDegraded: goroutine 1038 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000c485a0)\nStaticPodsDegraded: goroutine 1139 [select]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000e111a0)"
Sep 09 08:30:18.968 - 45s   W ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:30:18.968 - 45s   W ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:30:18.968 - 45s   W ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr pod has been pending longer than a minute
Sep 09 08:30:19.422 I ns/e2e-pod-network-test-5579 pod/test-container-pod node/ reason/Created
Sep 09 08:30:19.550 I ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ reason/Created
Sep 09 08:30:19.571 I ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:30:19.692 I ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:30:20.631 I ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:30:20.821 I ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Created
Sep 09 08:30:20.870 I ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Started
Sep 09 08:30:21.216 I ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ostest-5xqm8-worker-0-rzx47 container/agnhost reason/Ready
Sep 09 08:30:24.611 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is terminated: Error: _queue.go:68 +0x184\nStaticPodsDegraded: \nStaticPodsDegraded: goroutine 945 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000e10de0)\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/queue.go:198 +0xac\nStaticPodsDegraded: created by k8s.io/client-go/util/workqueue.newQueue\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/queue.go:58 +0x132\nStaticPodsDegraded: goroutine 1066 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000b82de0)\nStaticPodsDegraded: goroutine 1068 [select]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b83f20)\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:231 +0x3f8\nStaticPodsDegraded: created by k8s.io/client-go/util/workqueue.newDelayingQueue\nStaticPodsDegraded: \tk8s.io/client-go@v0.19.0/util/workqueue/delaying_queue.go:68 +0x184\nStaticPodsDegraded: goroutine 1075 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000d08d80)\nStaticPodsDegraded: goroutine 1077 [select]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000d09da0)\nStaticPodsDegraded: goroutine 1038 [chan receive]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000c485a0)\nStaticPodsDegraded: goroutine 1139 [select]:\nStaticPodsDegraded: k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000e111a0)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)"
Sep 09 08:30:25.329 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": dial tcp 10.196.3.65:8091: connect: connection refused (49 times)
Sep 09 08:30:28.413 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook reason/AddedInterface Add eth0 [10.128.121.8/23]
Sep 09 08:30:29.172 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-exec-hook reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:30:29.472 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-exec-hook reason/Created
Sep 09 08:30:29.557 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-exec-hook reason/Started
Sep 09 08:30:30.171 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-exec-hook reason/Ready
Sep 09 08:30:30.995 I ns/e2e-prestop-8695 pod/server reason/AddedInterface Add eth0 [10.128.199.181/23]
Sep 09 08:30:31.667 I ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 container/server reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:30:31.770 I ns/openshift-kuryr pod/kuryr-cni-kzsdq node/ostest-5xqm8-worker-0-rzx47 container/kuryr-cni reason/Ready
Sep 09 08:30:31.940 I ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 container/server reason/Created
Sep 09 08:30:31.984 I ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 container/server reason/Started
Sep 09 08:30:32.139 W ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 15s
Sep 09 08:30:32.155 I ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-exec-hook reason/Killing
Sep 09 08:30:32.225 I ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 container/server reason/Ready
Sep 09 08:30:32.336 I ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr container/kuryr-cni reason/Ready
Sep 09 08:30:32.861 I ns/e2e-prestop-8695 pod/tester node/ reason/Created
Sep 09 08:30:32.979 I ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:30:33.969 - 60s   W ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:30:34.006 W ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:30:34.006 W ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 container/pod-with-poststart-exec-hook reason/NotReady
Sep 09 08:30:48.968 - 14s   W ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:31:01.551 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Restarted
Sep 09 08:31:03.968 - 30s   W ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:31:05.708 W ns/e2e-services-252 pod/execpod-affinityxwmtc node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:31:05.732 W ns/e2e-services-252 pod/affinity-clusterip-timeout-w9fc2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:31:06.267 W ns/e2e-services-252 pod/affinity-clusterip-timeout-j28fn node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:31:06.281 W ns/e2e-services-252 pod/affinity-clusterip-timeout-7bs6z node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 04:31:06.623 I test="[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 04:31:06.626 - 235s  I test="[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" running
Sep 09 08:31:07.690 I ns/e2e-pod-network-test-5579 pod/test-container-pod reason/AddedInterface Add eth0 [10.128.193.43/23]
Sep 09 08:31:07.849 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ reason/Created
Sep 09 08:31:07.860 I ns/e2e-services-458 replicationcontroller/affinity-clusterip-transition reason/SuccessfulCreate Created pod: affinity-clusterip-transition-f8v2l
Sep 09 08:31:07.910 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:31:07.939 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ reason/Created
Sep 09 08:31:07.974 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ reason/Created
Sep 09 08:31:08.023 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:31:08.031 I ns/e2e-services-458 replicationcontroller/affinity-clusterip-transition reason/SuccessfulCreate Created pod: affinity-clusterip-transition-ls2rh
Sep 09 08:31:08.055 I ns/e2e-services-458 replicationcontroller/affinity-clusterip-transition reason/SuccessfulCreate Created pod: affinity-clusterip-transition-7rkcx
Sep 09 08:31:08.109 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:31:08.459 I ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:31:08.661 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(b597bb3bceb978c2e5365ce5819cf8f22630a5bca9b2a4cf2af4737131ab7710): netplugin failed: "2020/09/09 08:28:27 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-dns-7657;K8S_POD_NAME=dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c;K8S_POD_INFRA_CONTAINER_ID=b597bb3bceb978c2e5365ce5819cf8f22630a5bca9b2a4cf2af4737131ab7710, CNI_NETNS=/var/run/netns/a42e4dcf-14d2-408b-8416-8b323864054c).\n"
Sep 09 08:31:08.840 I ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:31:08.940 I ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:31:09.215 I ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:31:11.820 W ns/e2e-container-lifecycle-hook-350 pod/pod-with-poststart-exec-hook node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:31:12.707 E clusteroperator/authentication changed Degraded to True: WellKnownReadyController_SyncError: WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2
Sep 09 08:31:12.822 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Degraded changed from False to True ("WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2")
Sep 09 08:31:14.763 I ns/e2e-job-3562 pod/foo-42j4q node/ reason/Created
Sep 09 08:31:14.804 I ns/e2e-job-3562 job/foo reason/SuccessfulCreate Created pod: foo-42j4q
Sep 09 08:31:14.845 I ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:31:14.856 I ns/e2e-job-3562 pod/foo-wqldk node/ reason/Created
Sep 09 08:31:14.882 I ns/e2e-job-3562 job/foo reason/SuccessfulCreate Created pod: foo-wqldk
Sep 09 08:31:14.966 I ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:31:15.663 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ reason/Created
Sep 09 08:31:15.803 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:31:16.163 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:31:16.828 W clusteroperator/network changed Progressing to False
Sep 09 08:31:16.969 W ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr reason/FailedMount MountVolume.SetUp failed for volume "default-token-w8p5m" : failed to sync secret cache: timed out waiting for the condition
Sep 09 08:31:17.399 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1m58.441361156s: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: tcp connection to 10.196.3.65:6443 succeeded
Sep 09 08:31:17.654 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1m58.579857474s: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: tcp connection to 10.196.3.65:6443 succeeded
Sep 09 08:31:17.730 I ns/openshift-apiserver deployment/apiserver reason/ConnectivityRestored Connectivity restored after 1m58.106413613s: kubernetes-apiserver-endpoint-ostest-5xqm8-master-1: tcp connection to 10.196.3.65:6443 succeeded
Sep 09 08:31:18.968 - 29s   W ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:31:18.973 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 reason/NonGracefulTermination Previous pod did not terminate gracefully: 2020-09-09 08:27:14.816738261 +0000 UTC
Sep 09 08:31:21.601 W clusteroperator/authentication changed Available to True: AsExpected: OAuthServerDeploymentAvailable: availableReplicas==2
Sep 09 08:31:21.802 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Available changed from False to True ("OAuthServerDeploymentAvailable: availableReplicas==2") (13 times)
Sep 09 08:31:21.876 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Available changed from False to True ("OAuthServerDeploymentAvailable: availableReplicas==2") (14 times)
Sep 09 08:31:21.967 W clusteroperator/authentication changed Degraded to False
Sep 09 08:31:22.076 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Degraded changed from True to False ("") (2 times)
Sep 09 08:31:22.315 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver reason/Ready
Sep 09 08:31:25.019 W ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:31:25.744 W ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:31:25.757 W ns/e2e-pod-network-test-5579 pod/host-test-container-pod node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:31:25.772 W ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:31:25.835 W ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:31:25.862 W ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:31:25.862 W ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:31:26.538 W ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:31:26.538 W ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 container/pod-handle-http-request reason/NotReady
Sep 09 08:31:27.439 W ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:31:27.439 W ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:31:27.699 W ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:31:27.699 W ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:31:27.826 W ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:31:27.826 W ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 container/webserver reason/NotReady
Sep 09 08:31:28.028 W ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:31:28.028 W ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:31:28.133 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver\" is not ready: unknown reason\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)"
Sep 09 08:31:29.657 W ns/e2e-pod-network-test-5579 pod/test-container-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:31:29.698 W ns/e2e-pod-network-test-5579 pod/netserver-2 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:31:29.734 W ns/e2e-pod-network-test-5579 pod/netserver-1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:31:29.783 W ns/e2e-pod-network-test-5579 pod/netserver-0 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:31:33.437 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(ead3436b542de6dbdfdca50b3b9fec22cd367db5def72faae61a5be9fe1d1d53): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:31:33.969 W ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:31:33.969 - 14s   W ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:31:36.087 I ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ reason/Created
Sep 09 08:31:36.252 I ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:31:37.424 W ns/e2e-container-lifecycle-hook-350 pod/pod-handle-http-request node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:31:37.626 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh reason/AddedInterface Add eth0 [10.128.202.29/23]
Sep 09 08:31:38.347 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-transition reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:31:38.489 W ns/e2e-deployment-9061 pod/test-cleanup-controller-7hvkw node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:31:39.175 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-transition reason/Created
Sep 09 08:31:39.259 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-transition reason/Started
Sep 09 08:31:40.018 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-transition reason/Ready
Sep 09 08:31:41.998 W ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Restarted
Sep 09 08:31:45.058 I ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 reason/AddedInterface Add eth0 [10.128.131.244/23]
Sep 09 08:31:45.677 I ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 container/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:31:45.990 I ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 container/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 reason/Created
Sep 09 08:31:46.081 I ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 container/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 reason/Started
Sep 09 08:31:46.162 I ns/e2e-job-3562 pod/foo-wqldk reason/AddedInterface Add eth0 [10.128.145.70/23]
Sep 09 08:31:46.922 I ns/e2e-job-3562 pod/foo-42j4q reason/AddedInterface Add eth0 [10.128.145.191/23]
Sep 09 08:31:47.151 I ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:31:47.337 I ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Created
Sep 09 08:31:47.364 I ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e reason/AddedInterface Add eth0 [10.128.173.146/23]
Sep 09 08:31:47.431 I ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Started
Sep 09 08:31:47.619 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason"
Sep 09 08:31:47.683 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ostest-5xqm8-master-1_openshift-kube-apiserver(634c8d10601da01ae8b1110ae8b4f01f)" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason" (2 times)
Sep 09 08:31:47.728 I ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:31:48.014 I ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c reason/Created
Sep 09 08:31:48.058 I ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c reason/Started
Sep 09 08:31:48.256 I ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:31:48.336 I ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Ready
Sep 09 08:31:48.562 I ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Created
Sep 09 08:31:48.599 I ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c reason/Ready
Sep 09 08:31:48.644 I ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 container/configmap-volume-test reason/Started
Sep 09 08:31:49.012 W ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:31:49.025 W ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:31:49.206 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx reason/AddedInterface Add eth0 [10.128.202.240/23]
Sep 09 08:31:49.553 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ reason/Created
Sep 09 08:31:49.648 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:31:49.937 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-transition reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:31:50.244 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-transition reason/Created
Sep 09 08:31:50.271 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-transition reason/Started
Sep 09 08:31:50.338 I ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c reason/Killing
Sep 09 08:31:50.367 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-transition reason/Ready
Sep 09 08:31:50.405 W ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:31:50.601 I ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c reason/Killing
Sep 09 08:31:53.845 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f reason/AddedInterface Add eth0 [10.128.140.6/23]
Sep 09 08:31:54.648 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:31:54.959 W ns/e2e-configmap-3358 pod/pod-configmaps-f680087a-45f9-4846-9511-1cae740e0b6e node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:31:55.034 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Created
Sep 09 08:31:55.080 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Started
Sep 09 08:31:55.107 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:31:55.172 I ns/e2e-prestop-8695 pod/tester reason/AddedInterface Add eth0 [10.128.198.177/23]
Sep 09 08:31:55.406 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/querier reason/Created
Sep 09 08:31:55.481 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/querier reason/Started
Sep 09 08:31:55.499 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 08:31:55.810 I ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 container/tester reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:31:55.894 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Created
Sep 09 08:31:55.967 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Started
Sep 09 08:31:55.969 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l reason/AddedInterface Add eth0 [10.128.203.149/23]
Sep 09 08:31:56.141 I ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 container/tester reason/Created
Sep 09 08:31:56.210 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/Ready
Sep 09 08:31:56.210 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/querier reason/Ready
Sep 09 08:31:56.210 I ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/webserver reason/Ready
Sep 09 08:31:56.226 I ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 container/tester reason/Started
Sep 09 08:31:56.391 I ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 container/tester reason/Ready
Sep 09 08:31:56.453 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(d797e004f4313b03e2dbb32e27c8c063bba9b5e5552dfae94c9aca53699dec9e): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:31:56.613 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-transition reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:31:56.852 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-transition reason/Created
Sep 09 08:31:56.903 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-transition reason/Started
Sep 09 08:31:56.927 W ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:31:57.718 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-transition reason/Ready
Sep 09 08:31:57.839 W ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:31:58.213 I ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ reason/Created
Sep 09 08:31:58.283 I ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:31:58.358 I ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 container/tester reason/Killing
Sep 09 08:31:58.731 I ns/openshift-kube-apiserver pod/kube-apiserver-ostest-5xqm8-master-1 node/ostest-5xqm8-master-1 container/kube-apiserver-check-endpoints reason/Ready
Sep 09 08:31:58.955 I ns/e2e-services-458 pod/execpod-affinityfngdd node/ reason/Created
Sep 09 08:31:59.026 I ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:32:01.763 W ns/openshift-operator-lifecycle-manager pod/packageserver-6bb6556b69-jpnn8 node/ostest-5xqm8-master-0 reason/Unhealthy Readiness probe failed: Get "https://10.128.5.10:5443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) (4 times)
Sep 09 08:32:01.980 W ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:32:02.035 I ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 container/server reason/Killing
Sep 09 08:32:02.262 W ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:32:02.262 W ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/jessie-querier reason/NotReady
Sep 09 08:32:02.262 W ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/webserver reason/NotReady
Sep 09 08:32:02.262 W ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr container/querier reason/NotReady
Sep 09 08:32:03.382 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ reason/Created
Sep 09 08:32:03.451 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:32:03.607 I ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 node/ reason/Created
Sep 09 08:32:03.663 W ns/e2e-dns-8022 pod/dns-test-0958b575-03fe-483d-876c-67f0b2d8364f node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:32:03.706 W ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:32:03.706 W ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 container/server reason/NotReady
Sep 09 08:32:03.720 I ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:32:03.860 W ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:32:03.968 W ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:32:03.989 W ns/openshift-kuryr pod/kuryr-cni-7sd9x node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.196:8090/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (4 times)
Sep 09 08:32:05.706 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ostest-5xqm8-master-1 container \"kube-apiserver-check-endpoints\" is not ready: unknown reason" to "NodeControllerDegraded: All master nodes are ready" (2 times)
Sep 09 08:32:07.954 W ns/e2e-prestop-8695 pod/server node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:32:08.284 I ns/e2e-services-458 pod/execpod-affinityfngdd reason/AddedInterface Add eth0 [10.128.202.165/23]
Sep 09 08:32:08.779 W ns/e2e-security-context-test-129 pod/busybox-privileged-false-2f7cfc4c-ce6a-43b0-92b8-c3d4227de6b4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:32:08.975 I ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:32:09.266 I ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Created
Sep 09 08:32:09.318 I ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Started
Sep 09 08:32:09.488 I ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Ready
Sep 09 08:32:10.251 I ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a node/ reason/Created
Sep 09 08:32:10.314 I ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:32:18.659 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(481bbc57f0df35b6ac6d9506a40c9de96698ccbc6c32090e949ea8f021535ae7): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:32:21.495 E ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c init container exited with code 137 (Error): 
Sep 09 08:32:21.495 E ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 reason/Failed (): 
Sep 09 08:32:21.495 E ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 container/c container exited with code 137 (Error): 
Sep 09 08:32:21.801 E ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c init container exited with code 137 (Error): 
Sep 09 08:32:21.801 E ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:32:21.801 E ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 container/c container exited with code 137 (Error): 
Sep 09 08:32:26.831 W ns/e2e-job-3562 pod/foo-wqldk node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:32:27.266 W ns/e2e-job-3562 pod/foo-42j4q node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:32:28.785 I ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a node/ reason/Created
Sep 09 08:32:28.859 I ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:32:29.402 I ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 reason/AddedInterface Add eth0 [10.128.147.94/23]
Sep 09 08:32:29.621 E ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 container/tester container exited with code 137 (Error): 
Sep 09 08:32:30.132 I ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:32:30.394 I ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Created
Sep 09 08:32:30.447 I ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Started
Sep 09 08:32:30.880 I ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/Ready
Sep 09 08:32:36.075 I ns/e2e-pods-5366 pod/pod-qos-class-1ca75c28-79d4-4e89-8669-606a1d59952f node/ reason/Created
Sep 09 08:32:36.942 W ns/e2e-prestop-8695 pod/tester node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:32:40.781 I ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 reason/AddedInterface Add eth0 [10.128.149.203/23]
Sep 09 08:32:41.469 I ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:32:41.644 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod reason/AddedInterface Add eth0 [10.128.170.207/23]
Sep 09 08:32:41.696 I ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 reason/AddedInterface Add eth0 [10.128.121.82/23]
Sep 09 08:32:41.807 I ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 reason/Created
Sep 09 08:32:41.929 I ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 node/ostest-5xqm8-worker-0-cbbx9 container/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 reason/Started
Sep 09 08:32:42.311 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:32:42.360 I ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:32:42.361 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (64 times)
Sep 09 08:32:42.526 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(163ead3904f89d8bc1c8cbdcaf04674140f0a122171927c5849dbed431fbf12d): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:32:42.633 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Created
Sep 09 08:32:42.681 I ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:32:42.723 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Started
Sep 09 08:32:42.760 I ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:32:42.865 I ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:32:42.925 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Ready
Sep 09 08:32:43.787 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 reason/AddedInterface Add eth0 [10.128.168.99/23]
Sep 09 08:32:44.454 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 container/p reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:32:44.763 W ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:32:44.809 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Killing
Sep 09 08:32:44.821 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 container/p reason/Created
Sep 09 08:32:44.889 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 container/p reason/Started
Sep 09 08:32:45.086 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Pulled image/docker.io/library/busybox:1.29
Sep 09 08:32:45.270 W ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:32:45.373 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Created
Sep 09 08:32:45.389 W ns/e2e-pods-5366 pod/pod-qos-class-1ca75c28-79d4-4e89-8669-606a1d59952f node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:32:45.480 I ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Started
Sep 09 08:32:45.863 W ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 container/e2e-test-httpd-pod reason/Restarted
Sep 09 08:32:45.963 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 container/p reason/Ready
Sep 09 08:32:45.990 W ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:32:47.857 W ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:32:47.857 W ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 container/client-container reason/NotReady
Sep 09 08:32:48.968 W ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:32:51.685 W ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:32:51.718 I ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 container/p reason/Killing
Sep 09 08:32:54.463 W ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:32:55.862 W ns/e2e-security-context-test-6781 pod/busybox-readonly-false-11755724-7b4e-44ca-a1cb-d02ac9243dd7 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:32:56.340 W ns/e2e-emptydir-3337 pod/pod-006cf7a6-e6d5-4b78-99be-5b6f2cdc8d83 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:32:57.263 W ns/e2e-projected-6457 pod/labelsupdateae92dd5c-092f-4fd9-9928-d8b0d3248f17 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:32:57.369 I ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a reason/AddedInterface Add eth0 [10.128.150.92/23]
Sep 09 08:32:57.385 W ns/e2e-pods-5366 pod/pod-qos-class-1ca75c28-79d4-4e89-8669-606a1d59952f node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:32:57.541 W ns/e2e-kubectl-8748 pod/e2e-test-httpd-pod node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:32:57.990 I ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:32:58.301 I ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Created
Sep 09 08:32:58.408 I ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a node/ostest-5xqm8-worker-0-rzx47 container/projected-configmap-volume-test reason/Started
Sep 09 08:32:59.424 W ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:33:02.798 W ns/e2e-projected-2197 pod/pod-projected-configmaps-d57c4c3b-3e3d-4096-bb28-466df0e2bb0a node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:33:08.570 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(8100b303fe3d4c94d078cbbeef36e9e5508ba3cdc50af1c7f25890a69960614f): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:33:15.580 I ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a reason/AddedInterface Add eth0 [10.128.123.115/23]
Sep 09 08:33:16.249 I ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:33:16.594 I ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Created
Sep 09 08:33:16.665 I ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a node/ostest-5xqm8-worker-0-twrlr container/client-container reason/Started
Sep 09 08:33:17.827 W ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 0s
Sep 09 08:33:20.737 W ns/e2e-projected-2728 pod/downwardapi-volume-1ea9e210-e4be-4c1a-86f9-83c51cf90e0a node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:33:22.986 W ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:33:22.986 W ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 container/p reason/NotReady
Sep 09 08:33:23.998 W ns/e2e-events-1932 pod/send-events-0284b4fb-bcca-4b53-b13e-384a2bdd7849 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:33:31.399 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(361050e3bd32cfc19c457ebc904633de8aa3c6b42a603120e5ceebb921ab0cbb): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:33:55.399 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(9b79e0b2bd10cfc906830e464c131d536f168e611a623d27ce54d7814e19203c): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:34:16.489 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c_e2e-dns-7657_57d869aa-6d80-45a2-b9da-717b3303b6ad_0(e976ea09f00540a2a0525adcb90eba4f7185cfafcac5a5b44830889da3e9657e): [e2e-dns-7657/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?: Post "http://localhost:5036/addNetwork": EOF
Sep 09 08:34:17.108 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 08:34:17.108 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 08:34:17.293 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:34:32.478 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c reason/AddedInterface Add eth0 [10.128.155.209/23]
Sep 09 08:34:33.208 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:34:33.521 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Created
Sep 09 08:34:33.609 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Started
Sep 09 08:34:33.621 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:34:33.897 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Created
Sep 09 08:34:33.953 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Started
Sep 09 08:34:33.962 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Pulled image/gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0
Sep 09 08:34:34.219 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Created
Sep 09 08:34:34.285 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Started
Sep 09 08:34:35.136 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Ready
Sep 09 08:34:35.136 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Ready
Sep 09 08:34:35.136 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Ready
Sep 09 08:34:36.939 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 08:34:37.307 W clusteroperator/network changed Progressing to False
Sep 09 08:34:43.627 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:34:43.652 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/Killing
Sep 09 08:34:43.669 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/Killing
Sep 09 08:34:43.706 I ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/Killing
Sep 09 08:34:47.177 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:34:47.177 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/jessie-querier reason/NotReady
Sep 09 08:34:47.177 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/webserver reason/NotReady
Sep 09 08:34:47.177 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 container/querier reason/NotReady
Sep 09 08:34:50.323 W ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:34:50.340 I ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/Killing
Sep 09 08:34:50.440 W ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 1s
Sep 09 08:34:50.440 W ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:34:50.455 W ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 08:34:50.500 I ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-transition reason/Killing
Sep 09 08:34:50.500 I ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-transition reason/Killing
Sep 09 08:34:50.500 I ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-transition reason/Killing
Sep 09 08:34:50.977 W ns/e2e-dns-7657 pod/dns-test-ca33f01f-9cc5-4484-9afe-150c8f05906c node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:34:52.194 W ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:34:52.194 W ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 container/agnhost-pause reason/NotReady
Sep 09 08:34:52.756 W ns/e2e-services-458 pod/execpod-affinityfngdd node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:34:54.284 W ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:34:54.284 W ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 container/affinity-clusterip-transition reason/NotReady
Sep 09 08:34:54.322 W ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:34:54.322 W ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 container/affinity-clusterip-transition reason/NotReady
Sep 09 08:34:54.438 W ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:34:54.438 W ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr container/affinity-clusterip-transition reason/NotReady
Sep 09 08:34:55.303 W ns/e2e-services-458 pod/affinity-clusterip-transition-7rkcx node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:34:57.258 W ns/e2e-services-458 pod/affinity-clusterip-transition-f8v2l node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:35:02.394 W ns/e2e-services-458 pod/affinity-clusterip-transition-ls2rh node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 04:35:02.567 I test="[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" failed
Sep 09 04:35:02.570 - 122s  I test="[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]" running
Sep 09 08:35:26.345 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (131 times)
Sep 09 08:35:47.168 W ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:35:47.203 I ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:35:47.206 W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:35:47.227 I ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:35:47.266 I ns/openshift-marketplace pod/community-operators-w9s6x node/ reason/Created
Sep 09 08:35:47.270 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ reason/Created
Sep 09 08:35:47.385 W ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:35:47.412 W ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:35:47.418 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:35:47.427 I ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:35:47.445 I ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:35:47.473 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ reason/Created
Sep 09 08:35:47.486 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ reason/Created
Sep 09 08:35:47.525 I ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:35:47.585 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:35:47.662 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:35:47.928 W ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe errored: rpc error: code = Unknown desc = container is not created or running
Sep 09 08:35:48.927 W ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe errored: rpc error: code = NotFound desc = could not find container "ae2ed20eeda831067414f6c50c7f59c387dedb7fc7e49d46d3fe186657a4530c": container with ID starting with ae2ed20eeda831067414f6c50c7f59c387dedb7fc7e49d46d3fe186657a4530c not found: ID does not exist
Sep 09 08:35:49.009 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:35:49.009 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:35:49.794 W ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:35:49.794 W ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:35:49.822 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:35:49.900 W ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:35:49.900 W ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:35:49.994 W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:35:49.994 W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:35:50.065 W ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:35:50.065 W ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:36:03.968 - 29s   W ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:03.968 - 29s   W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:03.968 - 29s   W ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:03.968 - 45s   W ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:04.630 I ns/e2e-sched-preemption-path-3057 pod/without-label node/ reason/Created
Sep 09 08:36:04.731 I ns/e2e-sched-preemption-path-3057 pod/without-label node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:36:13.993 W ns/openshift-kuryr pod/kuryr-cni-7sd9x node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "http://10.196.2.196:8090/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (5 times)
Sep 09 08:36:42.063 W ns/openshift-marketplace pod/community-operators-4fm99 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:36:44.761 W ns/openshift-marketplace pod/redhat-marketplace-ln9xv node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:36:45.000 I ns/openshift-marketplace pod/community-operators-w9s6x reason/AddedInterface Add eth0 [10.128.3.52/23]
Sep 09 08:36:45.655 I ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:36:47.403 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:36:47.580 W clusteroperator/network changed Progressing to False
Sep 09 08:36:48.155 I ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:36:48.425 I ns/openshift-marketplace pod/redhat-operators-jv25k reason/AddedInterface Add eth0 [10.128.3.89/23]
Sep 09 08:36:48.448 I ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:36:48.543 I ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:36:48.583 W ns/openshift-marketplace pod/redhat-operators-r877k node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:36:48.968 W ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:48.968 W ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:48.968 W ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:48.968 W ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:36:49.258 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:36:49.424 I ns/openshift-marketplace pod/redhat-marketplace-9crgn reason/AddedInterface Add eth0 [10.128.3.205/23]
Sep 09 08:36:50.144 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:36:50.180 I ns/openshift-marketplace pod/certified-operators-4cr27 reason/AddedInterface Add eth0 [10.128.3.14/23]
Sep 09 08:36:50.818 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:36:51.340 W ns/openshift-marketplace pod/certified-operators-vf5w4 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:36:51.787 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:36:52.092 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:36:52.129 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:36:53.465 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:36:53.756 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:36:53.780 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:36:53.827 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:36:54.072 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:36:54.142 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:36:55.006 I ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:36:57.174 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:37:01.429 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 04:37:05.110 I test="[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]" failed
Sep 09 08:37:05.280 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:37:10.050 W ns/e2e-sched-preemption-path-3057 pod/without-label node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 08:37:18.968 - 420s  W ns/e2e-sched-preemption-path-3057 pod/without-label node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:38:07.335 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ reason/Created
Sep 09 08:38:07.377 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:38:07.583 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ reason/Created
Sep 09 08:38:07.627 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:38:21.800 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 reason/AddedInterface Add eth0 [10.128.151.55/23]
Sep 09 08:38:22.420 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:38:22.558 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 reason/AddedInterface Add eth0 [10.128.150.112/23]
Sep 09 08:38:22.723 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Created
Sep 09 08:38:22.779 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Started
Sep 09 08:38:23.163 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:38:23.187 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Ready
Sep 09 08:38:23.459 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Created
Sep 09 08:38:23.490 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Started
Sep 09 08:38:24.169 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Ready
Sep 09 08:38:25.930 W ns/default pod/kuryr-pod-852832911 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:38:25.943 I ns/default pod/kuryr-pod-852832911 reason/TaintManagerEviction Marking for deletion Pod default/kuryr-pod-852832911
Sep 09 08:38:25.954 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 3600s
Sep 09 08:38:25.966 I ns/default pod/kuryr-pod-852832911 node/ostest-5xqm8-worker-0-rzx47 container/kuryr-pod-852832911 reason/Killing
Sep 09 08:38:25.975 I ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx reason/TaintManagerEviction Marking for deletion Pod openshift-ingress/router-default-6d6ccf5796-p2qjx
Sep 09 08:38:25.976 W ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:38:25.977 W ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-th8l7 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:38:26.012 I ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 container/router reason/Killing
Sep 09 08:38:26.047 I ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Killing
Sep 09 08:38:26.047 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-th8l7 reason/TaintManagerEviction Marking for deletion Pod openshift-monitoring/prometheus-adapter-58b5c9d9c7-th8l7
Sep 09 08:38:26.078 I ns/openshift-marketplace pod/community-operators-sh97c reason/TaintManagerEviction Marking for deletion Pod openshift-marketplace/community-operators-sh97c
Sep 09 08:38:26.078 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-th8l7 node/ostest-5xqm8-worker-0-rzx47 container/prometheus-adapter reason/Killing
Sep 09 08:38:26.159 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-kcw92 node/ reason/Created
Sep 09 08:38:26.203 I ns/openshift-ingress pod/router-default-6d6ccf5796-85q5s node/ reason/Created
Sep 09 08:38:26.206 I ns/openshift-monitoring replicaset/prometheus-adapter-58b5c9d9c7 reason/SuccessfulCreate Created pod: prometheus-adapter-58b5c9d9c7-kcw92
Sep 09 08:38:26.267 I ns/openshift-ingress replicaset/router-default-6d6ccf5796 reason/SuccessfulCreate Created pod: router-default-6d6ccf5796-85q5s
Sep 09 08:38:26.296 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-kcw92 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:38:26.375 I ns/openshift-ingress pod/router-default-6d6ccf5796-85q5s node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:38:27.271 W ns/default pod/kuryr-pod-852832911 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:38:27.271 W ns/default pod/kuryr-pod-852832911 node/ostest-5xqm8-worker-0-rzx47 container/kuryr-pod-852832911 reason/NotReady
Sep 09 08:38:27.314 E ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-th8l7 node/ostest-5xqm8-worker-0-rzx47 container/prometheus-adapter container exited with code 2 (Error): er scope\nE0909 06:34:00.571702       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.662017       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.662170       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.725424       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 07:03:37.725671       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 08:13:38.991438       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0909 08:13:38.991674       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
Sep 09 08:38:27.357 I ns/openshift-ingress pod/router-default-6d6ccf5796-85q5s node/ostest-5xqm8-worker-0-cbbx9 container/router reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36dab0a26843c2d5ad925f195626977545286d0393a66f34036367ff62c775fc
Sep 09 08:38:28.679 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500
Sep 09 08:38:30.501 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-kcw92 reason/AddedInterface Add eth0 [10.128.54.198/23]
Sep 09 08:38:30.910 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 reason/TaintManagerEviction Marking for deletion Pod e2e-taint-multiple-pods-6435/taint-eviction-b1
Sep 09 08:38:30.921 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:38:30.940 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Killing
Sep 09 08:38:31.380 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-kcw92 node/ostest-5xqm8-worker-0-twrlr container/prometheus-adapter reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:465c1b16155d127cadc243c37b32e1c969ed6c3887bce9c95dc8a7a347682c63
Sep 09 08:38:31.851 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-kcw92 node/ostest-5xqm8-worker-0-twrlr container/prometheus-adapter reason/Created
Sep 09 08:38:32.035 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-kcw92 node/ostest-5xqm8-worker-0-twrlr container/prometheus-adapter reason/Started
Sep 09 08:38:32.276 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:38:32.276 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/NotReady
Sep 09 08:38:33.071 I ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-kcw92 node/ostest-5xqm8-worker-0-twrlr container/prometheus-adapter reason/Ready
Sep 09 08:38:33.329 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:38:33.968 W ns/default pod/kuryr-pod-852832911 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:38:35.588 I ns/openshift-ingress pod/router-default-6d6ccf5796-85q5s node/ostest-5xqm8-worker-0-cbbx9 container/router reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36dab0a26843c2d5ad925f195626977545286d0393a66f34036367ff62c775fc
Sep 09 08:38:35.772 I ns/openshift-ingress pod/router-default-6d6ccf5796-85q5s node/ostest-5xqm8-worker-0-cbbx9 container/router reason/Created
Sep 09 08:38:35.804 I ns/openshift-ingress pod/router-default-6d6ccf5796-85q5s node/ostest-5xqm8-worker-0-cbbx9 container/router reason/Started
Sep 09 08:38:37.245 W ns/default pod/kuryr-pod-852832911 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:38:37.321 W ns/openshift-marketplace pod/community-operators-sh97c node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:38:37.381 I ns/openshift-marketplace pod/community-operators-f9j76 node/ reason/Created
Sep 09 08:38:37.442 W ns/openshift-monitoring pod/prometheus-adapter-58b5c9d9c7-th8l7 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:38:37.642 I ns/openshift-marketplace pod/community-operators-f9j76 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:38:38.677 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Sep 09 08:38:41.647 I ns/openshift-marketplace pod/community-operators-f9j76 reason/AddedInterface Add eth0 [10.128.2.120/23]
Sep 09 08:38:42.314 I ns/openshift-marketplace pod/community-operators-f9j76 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:38:44.865 I ns/openshift-marketplace pod/community-operators-f9j76 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:38:45.031 I ns/openshift-marketplace pod/community-operators-f9j76 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:38:45.134 I ns/openshift-marketplace pod/community-operators-f9j76 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:38:48.695 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (3 times)
Sep 09 08:38:48.724 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 container/router reason/NotReady
Sep 09 08:38:49.921 I ns/openshift-ingress pod/router-default-6d6ccf5796-85q5s node/ostest-5xqm8-worker-0-cbbx9 container/router reason/Ready
Sep 09 08:38:50.926 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 reason/TaintManagerEviction Marking for deletion Pod e2e-taint-multiple-pods-6435/taint-eviction-b2
Sep 09 08:38:50.937 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:38:50.962 I ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Killing
Sep 09 08:38:52.370 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:38:52.370 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/NotReady
Sep 09 08:38:53.390 W ns/e2e-taint-multiple-pods-6435 pod/taint-eviction-b2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:38:55.407 I ns/e2e-sched-pred-6375 pod/without-label node/ reason/Created
Sep 09 08:38:55.451 I ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:38:56.196 I ns/openshift-marketplace pod/community-operators-f9j76 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:38:58.679 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (4 times)
Sep 09 08:39:08.680 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (5 times)
Sep 09 08:39:17.241 W ns/openshift-ingress pod/router-default-6d6ccf5796-p2qjx node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:39:18.565 I ns/e2e-sched-pred-6375 pod/without-label reason/AddedInterface Add eth0 [10.128.119.130/23]
Sep 09 08:39:19.228 I ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:39:19.523 I ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Created
Sep 09 08:39:19.566 I ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Started
Sep 09 08:39:20.502 I ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Ready
Sep 09 08:39:21.463 W ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:39:21.570 I ns/e2e-sched-pred-6375 pod/with-labels node/ reason/Created
Sep 09 08:39:21.650 I ns/e2e-sched-pred-6375 pod/with-labels node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:39:22.489 I ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Killing
Sep 09 08:39:24.273 W ns/e2e-sched-pred-6375 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:39:25.818 I ns/e2e-sched-pred-6375 pod/with-labels reason/AddedInterface Add eth0 [10.128.118.85/23]
Sep 09 08:39:26.482 I ns/e2e-sched-pred-6375 pod/with-labels node/ostest-5xqm8-worker-0-rzx47 container/with-labels reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:39:26.722 I ns/e2e-sched-pred-6375 pod/with-labels node/ostest-5xqm8-worker-0-rzx47 container/with-labels reason/Created
Sep 09 08:39:26.769 I ns/e2e-sched-pred-6375 pod/with-labels node/ostest-5xqm8-worker-0-rzx47 container/with-labels reason/Started
Sep 09 08:39:27.529 I ns/e2e-sched-pred-6375 pod/with-labels node/ostest-5xqm8-worker-0-rzx47 container/with-labels reason/Ready
Sep 09 08:39:29.056 I ns/e2e-sched-pred-3807 pod/without-label node/ reason/Created
Sep 09 08:39:29.106 I ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:39:33.339 W ns/e2e-sched-pred-6375 pod/with-labels node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:39:35.595 W ns/e2e-sched-pred-6375 pod/with-labels node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:39:52.138 I ns/e2e-sched-pred-3807 pod/without-label reason/AddedInterface Add eth0 [10.128.121.26/23]
Sep 09 08:39:52.847 I ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:39:53.123 I ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Created
Sep 09 08:39:53.178 I ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Started
Sep 09 08:39:53.636 I ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Ready
Sep 09 08:39:55.105 W ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:39:55.174 I ns/e2e-sched-pred-3807 pod/pod4 node/ reason/Created
Sep 09 08:39:55.243 I ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:39:55.610 I ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Killing
Sep 09 08:39:57.893 W ns/e2e-sched-pred-3807 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:39:59.323 I ns/e2e-sched-pred-3807 pod/pod4 reason/AddedInterface Add eth0 [10.128.121.175/23]
Sep 09 08:39:59.979 I ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 container/pod4 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:40:00.209 I ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 container/pod4 reason/Created
Sep 09 08:40:00.261 I ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 container/pod4 reason/Started
Sep 09 08:40:00.751 I ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 container/pod4 reason/Ready
Sep 09 08:40:01.275 I ns/e2e-sched-pred-3807 pod/pod5 node/ reason/Created
Sep 09 08:40:01.312 W ns/e2e-sched-pred-3807 pod/pod5 reason/FailedScheduling 0/6 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 5 node(s) didn't match node selector.
Sep 09 08:40:01.327 W ns/e2e-sched-pred-3807 pod/pod5 reason/FailedScheduling 0/6 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 5 node(s) didn't match node selector.
Sep 09 08:41:03.968 - 240s  W ns/e2e-sched-pred-3807 pod/pod5 node/ pod has been pending longer than a minute
Sep 09 08:41:06.332 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (133 times)
Sep 09 08:41:18.900 W ns/e2e-sched-pred-3807 pod/pod5 reason/FailedScheduling 0/6 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 5 node(s) didn't match node selector.
Sep 09 08:44:30.026 W ns/e2e-sched-preemption-path-3057 pod/without-label node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:45:03.582 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ reason/Created
Sep 09 08:45:03.653 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e reason/SuccessfulCreate Created pod: wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n
Sep 09 08:45:03.719 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ reason/Created
Sep 09 08:45:03.719 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ reason/Created
Sep 09 08:45:03.779 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e reason/SuccessfulCreate Created pod: wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9
Sep 09 08:45:03.779 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:45:03.816 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:45:03.825 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ reason/Created
Sep 09 08:45:03.839 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e reason/SuccessfulCreate Created pod: wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9
Sep 09 08:45:03.850 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ reason/Created
Sep 09 08:45:03.873 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e reason/SuccessfulCreate Created pod: wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt
Sep 09 08:45:03.895 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:45:03.946 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:45:03.946 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e reason/SuccessfulCreate Created pod: wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8
Sep 09 08:45:03.955 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:45:07.985 W ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:45:08.000 W ns/e2e-sched-pred-3807 pod/pod5 node/ reason/GracefulDelete in 0s
Sep 09 08:45:08.011 W ns/e2e-sched-pred-3807 pod/pod5 node/ reason/Deleted
Sep 09 08:45:10.004 W ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:45:10.004 W ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 container/pod4 reason/NotReady
Sep 09 08:45:17.227 W ns/e2e-sched-pred-3807 pod/pod4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:45:25.343 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (99 times)
Sep 09 08:45:26.482 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 reason/AddedInterface Add eth0 [10.128.119.207/23]
Sep 09 08:45:26.981 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n reason/AddedInterface Add eth0 [10.128.118.164/23]
Sep 09 08:45:27.397 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 reason/AddedInterface Add eth0 [10.128.118.173/23]
Sep 09 08:45:27.476 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:45:27.771 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:45:27.896 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:45:27.956 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:45:28.154 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:45:28.169 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:45:28.234 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:45:28.379 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Ready
Sep 09 08:45:28.443 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Ready
Sep 09 08:45:28.559 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:45:28.632 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:45:29.365 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Ready
Sep 09 08:46:03.968 - 210s  W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:46:03.968 - 225s  W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:46:08.242 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/NotReady
Sep 09 08:46:08.242 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Restarted
Sep 09 08:46:08.956 W clusteroperator/network changed Progressing to True: Deploying: Deployment "openshift-kuryr/kuryr-controller" is not available (awaiting 1 nodes)
Sep 09 08:47:10.427 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9_e2e-emptydir-wrapper-3207_a0aa3bec-b1ae-4386-8977-585c11f79b71_0(038362c374d52ee0d0ac561df5272a195828eefe748cd1d51ad12d8cb61723e9): netplugin failed: "2020/09/09 08:45:04 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-emptydir-wrapper-3207;K8S_POD_NAME=wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9;K8S_POD_INFRA_CONTAINER_ID=038362c374d52ee0d0ac561df5272a195828eefe748cd1d51ad12d8cb61723e9, CNI_NETNS=/var/run/netns/dc9b572f-6282-42ae-85eb-fade2f2f8d91).\n"
Sep 09 08:47:10.625 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt_e2e-emptydir-wrapper-3207_b270e65f-9ba1-47ab-b5ff-8168912b24b3_0(08ad5ff8ddd7eab35261d2f862a72f623b6cfe1f6332dcdae1758c2580f65727): netplugin failed: "2020/09/09 08:45:04 Calling kuryr-daemon with ADD request (CNI_ARGS=IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-emptydir-wrapper-3207;K8S_POD_NAME=wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt;K8S_POD_INFRA_CONTAINER_ID=08ad5ff8ddd7eab35261d2f862a72f623b6cfe1f6332dcdae1758c2580f65727, CNI_NETNS=/var/run/netns/334143ca-dd56-48fd-b8ad-ec6762cccf21).\n"
Sep 09 08:47:15.816 I ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 container/controller reason/Ready
Sep 09 08:47:15.960 W clusteroperator/network changed Progressing to False
Sep 09 08:47:31.955 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9_e2e-emptydir-wrapper-3207_a0aa3bec-b1ae-4386-8977-585c11f79b71_0(03b761d552b4cde6515232dfd9af0e9a3aaad907bc1941db48207acbd903af7b): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:47:33.473 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt_e2e-emptydir-wrapper-3207_b270e65f-9ba1-47ab-b5ff-8168912b24b3_0(bb01c04044dcd3daa8823a6a68aa3d9a1380554e8de620827a2b8047f5528658): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:47:56.681 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt_e2e-emptydir-wrapper-3207_b270e65f-9ba1-47ab-b5ff-8168912b24b3_0(560b871275749f100448fc14f17af0ac204886309d9cd6f4fe0a13e40a365a57): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:47:56.708 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9_e2e-emptydir-wrapper-3207_a0aa3bec-b1ae-4386-8977-585c11f79b71_0(c0c2f4e180288174b0a331a86fabaf373dabba0f7936e9f00eadd68cc85db892): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:48:02.366 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (74 times)
Sep 09 08:48:12.384 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (75 times)
Sep 09 08:48:19.416 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9_e2e-emptydir-wrapper-3207_a0aa3bec-b1ae-4386-8977-585c11f79b71_0(fbdf525150a0097e0b3e931a520e8f91c0acf034aa5b60caa9fd19b84ea46b9a): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:48:22.387 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt_e2e-emptydir-wrapper-3207_b270e65f-9ba1-47ab-b5ff-8168912b24b3_0(86a9855bd8a38601da8b757e988ade321c9605789331cd3bde8b738000ad0c7b): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:48:22.400 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 500 (76 times)
Sep 09 08:48:45.567 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9_e2e-emptydir-wrapper-3207_a0aa3bec-b1ae-4386-8977-585c11f79b71_0(d81a9d194c18cf7ef179a26bc5e29d467d1b941eacaa3737660c2962444a7547): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:48:46.327 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt_e2e-emptydir-wrapper-3207_b270e65f-9ba1-47ab-b5ff-8168912b24b3_0(ee7202fe17102d8afb1cb52b1b00fd0b13f68a6fbc396fb42c9bc02394a0206a): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:49:10.506 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt_e2e-emptydir-wrapper-3207_b270e65f-9ba1-47ab-b5ff-8168912b24b3_0(674c1aeec94f0758fc58bd4155ff2a11907330fecb81d55a4126450058ba141b): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:49:10.545 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9_e2e-emptydir-wrapper-3207_a0aa3bec-b1ae-4386-8977-585c11f79b71_0(9b82539c1c565dacef6967c02bb4cc7ea48674b1bd1316ba2b484d67a8016114): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:49:31.541 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt_e2e-emptydir-wrapper-3207_b270e65f-9ba1-47ab-b5ff-8168912b24b3_0(134dd50930311db27377bfb4d9989c7fd258d8d8a4c3d4bb39d55da406e5a26b): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 
Sep 09 08:49:36.469 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9_e2e-emptydir-wrapper-3207_a0aa3bec-b1ae-4386-8977-585c11f79b71_0(52cf452807dcd1a12b99e0301a451603cd250352ceee90c42978ea963fc2aae8): [e2e-emptydir-wrapper-3207/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9:kuryr]: error adding container to network "kuryr": CNI Daemon returned error 500 (Internal Server Error)
Sep 09 08:49:37.506 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/NotReady
Sep 09 08:49:37.506 W ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Restarted
Sep 09 08:49:37.799 W clusteroperator/network changed Progressing to True: Deploying: DaemonSet "openshift-kuryr/kuryr-cni" is not available (awaiting 1 nodes)
Sep 09 08:49:45.593 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt reason/AddedInterface Add eth0 [10.128.119.112/23]
Sep 09 08:49:46.267 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:49:46.679 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:49:46.736 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:49:47.596 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Ready
Sep 09 08:49:51.652 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 reason/AddedInterface Add eth0 [10.128.118.126/23]
Sep 09 08:49:52.527 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:49:52.952 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Created
Sep 09 08:49:53.030 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Started
Sep 09 08:49:53.611 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Ready
Sep 09 08:49:54.792 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:49:54.813 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:49:54.837 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:49:54.839 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:49:54.840 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:49:54.877 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Killing
Sep 09 08:49:54.901 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Killing
Sep 09 08:49:54.918 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Killing
Sep 09 08:49:54.951 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Killing
Sep 09 08:49:55.810 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/Killing
Sep 09 08:49:56.866 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:49:56.866 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/NotReady
Sep 09 08:49:56.910 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:49:56.910 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/NotReady
Sep 09 08:49:57.041 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:49:57.041 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/NotReady
Sep 09 08:49:57.096 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:49:57.096 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 container/test-container reason/NotReady
Sep 09 08:49:57.189 I ns/openshift-kuryr pod/kuryr-cni-qjsxf node/ostest-5xqm8-worker-0-cbbx9 container/kuryr-cni reason/Ready
Sep 09 08:49:57.321 W clusteroperator/network changed Progressing to False
Sep 09 08:50:03.968 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:50:03.968 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:50:03.968 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:50:03.968 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 pod has been pending longer than a minute
Sep 09 08:50:06.801 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-xn7q9 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:06.868 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-lpph8 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:06.928 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-cspm9 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:07.003 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-tn8tt node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:07.075 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-291fcc3b-b4de-41c4-b2a8-5603c564005e-h254n node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:07.213 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ reason/Created
Sep 09 08:50:07.221 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c reason/SuccessfulCreate Created pod: wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4
Sep 09 08:50:07.282 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ reason/Created
Sep 09 08:50:07.298 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:50:07.317 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c reason/SuccessfulCreate Created pod: wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk
Sep 09 08:50:07.385 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ reason/Created
Sep 09 08:50:07.424 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c reason/SuccessfulCreate Created pod: wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj
Sep 09 08:50:07.508 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ reason/Created
Sep 09 08:50:07.515 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:50:07.535 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:50:07.535 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c reason/SuccessfulCreate Created pod: wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj
Sep 09 08:50:07.540 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ reason/Created
Sep 09 08:50:07.569 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c reason/SuccessfulCreate Created pod: wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq
Sep 09 08:50:07.653 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:50:07.732 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:50:21.458 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 reason/AddedInterface Add eth0 [10.128.118.226/23]
Sep 09 08:50:22.058 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj reason/AddedInterface Add eth0 [10.128.118.177/23]
Sep 09 08:50:22.201 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:50:22.649 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:50:22.706 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:50:22.828 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:50:23.184 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:50:23.249 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:50:23.693 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:50:23.757 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:50:23.865 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk reason/AddedInterface Add eth0 [10.128.118.251/23]
Sep 09 08:50:24.553 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:50:24.901 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:50:24.960 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:50:26.384 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:50:46.342 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (137 times)
Sep 09 08:50:47.160 W ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:50:47.183 W ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:50:47.200 I ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:50:47.215 I ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:50:47.223 I ns/openshift-marketplace pod/community-operators-rq8jr node/ reason/Created
Sep 09 08:50:47.229 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ reason/Created
Sep 09 08:50:47.260 I ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:50:47.262 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:50:47.328 W ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:50:47.366 W ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:50:47.386 I ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:50:47.399 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ reason/Created
Sep 09 08:50:47.407 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ reason/Created
Sep 09 08:50:47.418 I ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 08:50:47.505 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:50:47.512 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:50:49.575 W ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Liveness probe errored: rpc error: code = NotFound desc = could not find container "92679afd1a977adcd9cbbba44e10d090367d875cd7b8c3112c37e2d2abf81de2": container with ID starting with 92679afd1a977adcd9cbbba44e10d090367d875cd7b8c3112c37e2d2abf81de2 not found: ID does not exist
Sep 09 08:50:50.446 W ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:50:50.446 W ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:50:51.261 W ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 reason/Unhealthy Readiness probe errored: rpc error: code = NotFound desc = could not find container "af357952e5103b0fb016928335b4033ffa885ad451cd9f74e65b180d724aee55": container with ID starting with af357952e5103b0fb016928335b4033ffa885ad451cd9f74e65b180d724aee55 not found: ID does not exist
Sep 09 08:50:51.500 W ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:50:51.500 W ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:50:51.555 W ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:50:51.555 W ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:50:51.675 W ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:50:51.675 W ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/NotReady
Sep 09 08:50:52.393 W ns/openshift-marketplace pod/community-operators-w9s6x node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:52.415 W ns/openshift-marketplace pod/redhat-marketplace-9crgn node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:52.415 W ns/openshift-marketplace pod/certified-operators-4cr27 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:52.423 W ns/openshift-marketplace pod/redhat-operators-jv25k node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:50:54.309 I ns/openshift-marketplace pod/certified-operators-bjf7t reason/AddedInterface Add eth0 [10.128.2.238/23]
Sep 09 08:50:54.583 I ns/openshift-marketplace pod/redhat-operators-p4bkw reason/AddedInterface Add eth0 [10.128.2.178/23]
Sep 09 08:50:54.768 I ns/openshift-marketplace pod/community-operators-rq8jr reason/AddedInterface Add eth0 [10.128.2.211/23]
Sep 09 08:50:54.819 I ns/openshift-marketplace pod/redhat-marketplace-njpdl reason/AddedInterface Add eth0 [10.128.2.149/23]
Sep 09 08:50:55.266 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:50:55.624 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:50:55.740 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:50:55.805 I ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:50:59.672 I ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 08:50:59.994 I ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:51:00.049 I ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:51:00.840 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 08:51:01.161 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:51:01.205 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:51:01.419 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 08:51:01.744 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 08:51:01.841 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:51:01.954 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:51:02.145 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 08:51:02.249 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 08:51:10.868 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:51:11.128 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:51:12.334 I ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:51:14.543 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 08:51:18.970 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:51:18.970 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:51:26.154 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj reason/AddedInterface Add eth0 [10.128.119.217/23]
Sep 09 08:51:26.406 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq reason/AddedInterface Add eth0 [10.128.118.42/23]
Sep 09 08:51:26.919 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:51:27.149 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:51:27.252 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:51:27.327 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:51:27.486 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:51:27.551 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:51:27.912 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:51:27.962 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:51:28.435 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:51:28.478 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:51:28.480 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:51:28.491 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:51:28.526 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:51:28.539 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:51:28.558 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:51:28.590 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:51:29.947 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:51:29.973 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:51:30.033 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:51:30.033 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:51:30.132 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:51:30.132 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:51:30.289 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:51:30.289 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:51:33.968 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:51:33.968 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:51:33.968 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:51:37.278 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-7hsq4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:51:37.357 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-fnngk node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:51:37.429 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-jmgbj node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:51:37.520 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-cgstj node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:51:37.657 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-296f7da7-047e-49cd-a0b2-49e390f2377c-b6knq node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:51:37.886 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ reason/Created
Sep 09 08:51:37.909 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2 reason/SuccessfulCreate Created pod: wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6
Sep 09 08:51:37.967 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:51:37.982 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ reason/Created
Sep 09 08:51:38.001 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2 reason/SuccessfulCreate Created pod: wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj
Sep 09 08:51:38.027 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ reason/Created
Sep 09 08:51:38.050 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2 reason/SuccessfulCreate Created pod: wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q
Sep 09 08:51:38.050 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:51:38.115 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ reason/Created
Sep 09 08:51:38.129 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:51:38.132 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ reason/Created
Sep 09 08:51:38.152 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2 reason/SuccessfulCreate Created pod: wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks
Sep 09 08:51:38.199 I ns/e2e-emptydir-wrapper-3207 replicationcontroller/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2 reason/SuccessfulCreate Created pod: wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86
Sep 09 08:51:38.202 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:51:38.288 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:51:41.994 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 reason/AddedInterface Add eth0 [10.128.118.92/23]
Sep 09 08:51:42.621 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:51:43.070 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:51:43.163 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:51:44.080 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:51:50.764 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks reason/AddedInterface Add eth0 [10.128.118.194/23]
Sep 09 08:51:51.489 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:51:51.822 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:51:51.881 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:51:52.093 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:51:52.557 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj reason/AddedInterface Add eth0 [10.128.119.210/23]
Sep 09 08:51:52.567 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q reason/AddedInterface Add eth0 [10.128.118.142/23]
Sep 09 08:51:53.320 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:51:53.335 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:51:53.685 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:51:53.731 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:51:53.740 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:51:53.775 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:51:54.112 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:51:54.178 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:52:01.711 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 reason/AddedInterface Add eth0 [10.128.118.42/23]
Sep 09 08:52:02.462 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:52:02.799 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Created
Sep 09 08:52:02.851 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Started
Sep 09 08:52:03.188 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Ready
Sep 09 08:52:05.163 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:52:05.175 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:52:05.178 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:52:05.178 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:52:05.180 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:52:05.198 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:52:05.231 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:52:05.243 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:52:05.253 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:52:05.273 I ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/Killing
Sep 09 08:52:07.565 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:52:07.565 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:52:07.710 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:52:07.710 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:52:07.763 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:52:07.763 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:52:07.823 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:52:07.823 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:52:07.865 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:52:07.865 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 container/test-container reason/NotReady
Sep 09 08:52:17.265 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-xx6ks node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:52:17.370 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-497t6 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:52:17.460 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-j7g86 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:52:17.534 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-wxj5q node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:52:17.608 W ns/e2e-emptydir-wrapper-3207 pod/wrapped-volume-race-b4e4ad4e-d101-4af9-af08-ce28e196eae2-njlsj node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:52:19.948 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ reason/Created
Sep 09 08:52:19.965 I ns/e2e-daemonsets-9750 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-g49pq
Sep 09 08:52:20.021 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:52:43.618 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq reason/AddedInterface Add eth0 [10.128.120.110/23]
Sep 09 08:52:44.327 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:52:44.743 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr container/app reason/Created
Sep 09 08:52:44.797 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr container/app reason/Started
Sep 09 08:52:45.744 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr container/app reason/Ready
Sep 09 08:52:45.987 W ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:52:46.004 I ns/e2e-daemonsets-9750 daemonset/daemon-set reason/SuccessfulDelete Deleted pod: daemon-set-g49pq
Sep 09 08:52:47.737 I ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr container/app reason/Killing
Sep 09 08:52:52.382 W ns/e2e-daemonsets-9750 pod/daemon-set-g49pq node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:52:52.425 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ reason/Created
Sep 09 08:52:52.441 I ns/e2e-daemonsets-9750 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-c8p65
Sep 09 08:52:52.471 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:52:56.581 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 reason/AddedInterface Add eth0 [10.128.121.190/23]
Sep 09 08:52:57.488 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:52:57.831 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr container/app reason/Created
Sep 09 08:52:57.929 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr container/app reason/Started
Sep 09 08:52:58.836 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr container/app reason/Ready
Sep 09 08:52:59.315 W ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:53:00.844 I ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr container/app reason/Killing
Sep 09 08:53:12.377 W ns/e2e-daemonsets-9750 pod/daemon-set-c8p65 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:53:13.568 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ reason/Created
Sep 09 08:53:13.580 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-gwfz4
Sep 09 08:53:13.611 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ reason/Created
Sep 09 08:53:13.614 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:53:13.634 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ reason/Created
Sep 09 08:53:13.634 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-ltht8
Sep 09 08:53:13.657 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-nxxq4
Sep 09 08:53:13.674 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:53:13.738 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:53:35.811 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 reason/AddedInterface Add eth0 [10.128.118.117/23]
Sep 09 08:53:35.886 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 reason/AddedInterface Add eth0 [10.128.119.57/23]
Sep 09 08:53:36.548 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:53:36.616 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:53:36.771 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 container/app reason/Created
Sep 09 08:53:36.848 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 container/app reason/Started
Sep 09 08:53:37.141 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr container/app reason/Created
Sep 09 08:53:37.202 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr container/app reason/Started
Sep 09 08:53:37.829 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 container/app reason/Ready
Sep 09 08:53:38.199 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr container/app reason/Ready
Sep 09 08:53:41.774 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 reason/AddedInterface Add eth0 [10.128.118.101/23]
Sep 09 08:53:42.502 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:53:42.862 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Created
Sep 09 08:53:42.911 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Started
Sep 09 08:53:43.262 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Ready
Sep 09 08:53:43.813 W ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:53:43.836 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulDelete Deleted pod: daemon-set-gwfz4
Sep 09 08:53:43.849 I ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr container/app reason/Killing
Sep 09 08:53:52.376 W ns/e2e-daemonsets-7406 pod/daemon-set-gwfz4 node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:53:52.448 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ reason/Created
Sep 09 08:53:52.483 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-2ngxn
Sep 09 08:53:52.573 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:53:55.995 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn reason/AddedInterface Add eth0 [10.128.118.250/23]
Sep 09 08:53:56.808 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr container/app reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:53:57.176 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr container/app reason/Created
Sep 09 08:53:57.267 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr container/app reason/Started
Sep 09 08:53:57.338 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr container/app reason/Ready
Sep 09 08:53:57.377 W ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:53:57.390 I ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 container/app reason/Killing
Sep 09 08:53:57.408 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulDelete Deleted pod: daemon-set-ltht8
Sep 09 08:53:58.858 W ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:53:58.858 W ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 container/app reason/NotReady
Sep 09 08:53:59.906 W ns/e2e-daemonsets-7406 pod/daemon-set-ltht8 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:53:59.947 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ reason/Created
Sep 09 08:53:59.967 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-768bk
Sep 09 08:54:00.066 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:54:03.156 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk reason/AddedInterface Add eth0 [10.128.119.228/23]
Sep 09 08:54:03.885 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 container/app reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:54:04.080 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 container/app reason/Created
Sep 09 08:54:04.134 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 container/app reason/Started
Sep 09 08:54:04.906 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 container/app reason/Ready
Sep 09 08:54:04.937 W ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:54:04.952 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulDelete Deleted pod: daemon-set-nxxq4
Sep 09 08:54:04.972 I ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Killing
Sep 09 08:54:06.367 W ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:54:06.367 W ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/NotReady
Sep 09 08:54:07.354 W ns/e2e-daemonsets-7406 pod/daemon-set-nxxq4 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:54:07.404 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ reason/Created
Sep 09 08:54:07.412 I ns/e2e-daemonsets-7406 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-swnph
Sep 09 08:54:07.443 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:54:10.387 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph reason/AddedInterface Add eth0 [10.128.119.92/23]
Sep 09 08:54:11.089 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20
Sep 09 08:54:11.362 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Created
Sep 09 08:54:11.412 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Started
Sep 09 08:54:12.371 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Ready
Sep 09 08:54:13.141 W ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:54:13.143 W ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:54:13.161 W ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:54:13.188 I ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 container/app reason/Killing
Sep 09 08:54:13.198 I ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr container/app reason/Killing
Sep 09 08:54:14.361 I ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Killing
Sep 09 08:54:14.524 W ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:54:14.524 W ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr container/app reason/NotReady
Sep 09 08:54:14.950 W ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:54:14.950 W ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 container/app reason/NotReady
Sep 09 08:54:15.385 E ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 container/app container exited with code 2 (Error): 
Sep 09 08:54:16.338 W ns/e2e-daemonsets-7406 pod/daemon-set-768bk node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:54:22.377 W ns/e2e-daemonsets-7406 pod/daemon-set-2ngxn node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:54:26.804 W ns/e2e-daemonsets-7406 pod/daemon-set-swnph node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:54:28.077 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ reason/Created
Sep 09 08:54:28.095 I ns/e2e-daemonsets-3894 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-nxxnr
Sep 09 08:54:28.132 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ reason/Created
Sep 09 08:54:28.148 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ reason/Created
Sep 09 08:54:28.150 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:54:28.158 I ns/e2e-daemonsets-3894 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-bxzwm
Sep 09 08:54:28.190 I ns/e2e-daemonsets-3894 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-dkqz7
Sep 09 08:54:28.265 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:54:28.270 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:54:50.403 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr reason/AddedInterface Add eth0 [10.128.120.83/23]
Sep 09 08:54:51.159 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:54:51.561 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr container/app reason/Created
Sep 09 08:54:51.655 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr container/app reason/Started
Sep 09 08:54:51.799 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr container/app reason/Ready
Sep 09 08:54:53.395 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 reason/AddedInterface Add eth0 [10.128.120.70/23]
Sep 09 08:54:54.118 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:54:54.428 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Created
Sep 09 08:54:54.568 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Started
Sep 09 08:54:55.219 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm reason/AddedInterface Add eth0 [10.128.120.54/23]
Sep 09 08:54:55.571 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Ready
Sep 09 08:54:55.808 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:54:56.092 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 container/app reason/Created
Sep 09 08:54:56.148 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 container/app reason/Started
Sep 09 08:54:57.101 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 container/app reason/Ready
Sep 09 08:54:58.105 W ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:54:59.094 I ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 container/app reason/Killing
Sep 09 08:55:07.361 W ns/e2e-daemonsets-3894 pod/daemon-set-bxzwm node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:55:07.446 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ reason/Created
Sep 09 08:55:07.522 I ns/e2e-daemonsets-3894 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-k5g4q
Sep 09 08:55:07.605 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:55:10.494 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q reason/AddedInterface Add eth0 [10.128.121.25/23]
Sep 09 08:55:11.294 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:55:11.557 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 container/app reason/Created
Sep 09 08:55:11.599 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 container/app reason/Started
Sep 09 08:55:12.209 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 container/app reason/Ready
Sep 09 08:55:12.528 W ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:55:12.531 W ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:55:12.550 I ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Killing
Sep 09 08:55:12.556 W ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:55:12.603 I ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr container/app reason/Killing
Sep 09 08:55:14.176 I ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 container/app reason/Killing
Sep 09 08:55:15.015 W ns/e2e-daemonsets-3894 pod/daemon-set-nxxnr node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:55:16.806 W ns/e2e-daemonsets-3894 pod/daemon-set-dkqz7 node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:55:27.223 W ns/e2e-daemonsets-3894 pod/daemon-set-k5g4q node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:55:44.355 I ns/e2e-sched-pred-5827 pod/without-label node/ reason/Created
Sep 09 08:55:44.400 I ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:56:07.495 I ns/e2e-sched-pred-5827 pod/without-label reason/AddedInterface Add eth0 [10.128.123.84/23]
Sep 09 08:56:08.194 I ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:56:08.435 I ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Created
Sep 09 08:56:08.483 I ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Started
Sep 09 08:56:09.376 I ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Ready
Sep 09 08:56:10.424 W ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:56:10.551 I ns/e2e-sched-pred-5827 pod/pod1 node/ reason/Created
Sep 09 08:56:10.602 I ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:56:11.369 I ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Killing
Sep 09 08:56:11.833 W ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:56:11.855 I ns/e2e-sched-pred-5827 pod/without-label node/ostest-5xqm8-worker-0-rzx47 container/without-label reason/Killing
Sep 09 08:56:13.698 I ns/e2e-sched-pred-5827 pod/pod1 reason/AddedInterface Add eth0 [10.128.122.71/23]
Sep 09 08:56:14.338 I ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 container/pod1 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:56:14.585 I ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 container/pod1 reason/Created
Sep 09 08:56:14.651 I ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 container/pod1 reason/Started
Sep 09 08:56:15.422 I ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 container/pod1 reason/Ready
Sep 09 08:56:16.611 I ns/e2e-sched-pred-5827 pod/pod2 node/ reason/Created
Sep 09 08:56:16.689 I ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:56:19.718 I ns/e2e-sched-pred-5827 pod/pod2 reason/AddedInterface Add eth0 [10.128.122.95/23]
Sep 09 08:56:20.190 W ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to add hostport mapping for sandbox k8s_pod2_e2e-sched-pred-5827_93f37260-f498-4a5a-a93c-60c565852e65_0(43043ef4f7198793e759d8b095ff6ffd5edf46a3c7031e979f1b10e488b7b6ee): cannot open hostport 54321 for pod k8s_pod2_e2e-sched-pred-5827_93f37260-f498-4a5a-a93c-60c565852e65_0_: listen tcp :54321: bind: address already in use
Sep 09 08:56:21.226 I ns/e2e-sched-pred-5827 pod/pod2 reason/AddedInterface Add eth0 [10.128.122.95/23]
Sep 09 08:56:21.900 I ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pod2 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:56:22.183 I ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pod2 reason/Created
Sep 09 08:56:22.233 I ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pod2 reason/Started
Sep 09 08:56:22.445 I ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 container/pod2 reason/Ready
Sep 09 08:56:22.667 I ns/e2e-sched-pred-5827 pod/pod3 node/ reason/Created
Sep 09 08:56:22.738 I ns/e2e-sched-pred-5827 pod/pod3 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:56:37.754 I ns/e2e-sched-pred-5827 pod/pod3 reason/AddedInterface Add eth0 [10.128.123.84/23]
Sep 09 08:56:38.403 I ns/e2e-sched-pred-5827 pod/pod3 node/ostest-5xqm8-worker-0-rzx47 container/pod3 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:56:38.642 I ns/e2e-sched-pred-5827 pod/pod3 node/ostest-5xqm8-worker-0-rzx47 container/pod3 reason/Created
Sep 09 08:56:38.686 I ns/e2e-sched-pred-5827 pod/pod3 node/ostest-5xqm8-worker-0-rzx47 container/pod3 reason/Started
Sep 09 08:56:39.525 I ns/e2e-sched-pred-5827 pod/pod3 node/ostest-5xqm8-worker-0-rzx47 container/pod3 reason/Ready
Sep 09 08:56:41.965 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ reason/Created
Sep 09 08:56:41.983 I ns/e2e-daemonsets-8004 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-ljjvk
Sep 09 08:56:42.006 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ reason/Created
Sep 09 08:56:42.006 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ reason/Created
Sep 09 08:56:42.036 I ns/e2e-daemonsets-8004 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-rhnwn
Sep 09 08:56:42.055 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:56:42.079 I ns/e2e-daemonsets-8004 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-86vds
Sep 09 08:56:42.086 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:56:42.095 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:56:46.583 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (139 times)
Sep 09 08:56:48.290 W ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:56:48.304 W ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:56:48.316 W ns/e2e-sched-pred-5827 pod/pod3 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:56:49.661 W ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:56:49.661 W ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 container/pod1 reason/NotReady
Sep 09 08:56:57.263 W ns/e2e-sched-pred-5827 pod/pod1 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:56:57.318 W ns/e2e-sched-pred-5827 pod/pod3 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:56:57.366 W ns/e2e-sched-pred-5827 pod/pod2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:57:09.937 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk reason/AddedInterface Add eth0 [10.128.119.142/23]
Sep 09 08:57:10.862 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:57:11.530 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr container/app reason/Created
Sep 09 08:57:11.622 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr container/app reason/Started
Sep 09 08:57:11.858 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr container/app reason/Ready
Sep 09 08:57:12.110 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn reason/AddedInterface Add eth0 [10.128.119.156/23]
Sep 09 08:57:12.172 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds reason/AddedInterface Add eth0 [10.128.119.125/23]
Sep 09 08:57:12.942 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:57:12.949 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 08:57:13.185 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 container/app reason/Created
Sep 09 08:57:13.202 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Created
Sep 09 08:57:13.274 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Started
Sep 09 08:57:13.313 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 container/app reason/Started
Sep 09 08:57:13.681 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 container/app reason/Ready
Sep 09 08:57:14.215 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Ready
Sep 09 08:57:14.999 E ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 reason/Failed (): 
Sep 09 08:57:15.055 W ns/e2e-daemonsets-8004 daemonset/daemon-set reason/FailedDaemonPod Found failed daemon pod e2e-daemonsets-8004/daemon-set-86vds on node ostest-5xqm8-worker-0-rzx47, will try to kill it
Sep 09 08:57:15.072 W ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 0s
Sep 09 08:57:15.159 I ns/e2e-daemonsets-8004 daemonset/daemon-set reason/SuccessfulDelete Deleted pod: daemon-set-86vds
Sep 09 08:57:15.665 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 container/app reason/Killing
Sep 09 08:57:16.856 W ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:57:16.870 I ns/e2e-daemonsets-8004 pod/daemon-set-86vds node/ostest-5xqm8-worker-0-rzx47 container/app reason/Killing
Sep 09 08:57:16.906 I ns/e2e-daemonsets-8004 pod/daemon-set-v76hb node/ reason/Created
Sep 09 08:57:16.939 I ns/e2e-daemonsets-8004 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-v76hb
Sep 09 08:57:16.970 I ns/e2e-daemonsets-8004 pod/daemon-set-v76hb node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:57:17.339 W ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 08:57:17.357 I ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Killing
Sep 09 08:57:17.364 W ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 08:57:17.380 W ns/e2e-daemonsets-8004 pod/daemon-set-v76hb node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 08:57:17.396 I ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr container/app reason/Killing
Sep 09 08:57:18.925 W ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:57:18.925 W ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr container/app reason/NotReady
Sep 09 08:57:19.245 W ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:57:19.245 W ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 container/app reason/NotReady
Sep 09 08:57:22.385 W ns/e2e-daemonsets-8004 pod/daemon-set-ljjvk node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:57:26.794 W ns/e2e-daemonsets-8004 pod/daemon-set-rhnwn node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:57:27.318 W ns/e2e-daemonsets-8004 pod/daemon-set-v76hb node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 08:58:29.262 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ reason/Created
Sep 09 08:58:29.328 W ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 08:58:29.365 I ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ reason/Created
Sep 09 08:58:29.403 W ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 08:58:29.496 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ reason/Created
Sep 09 08:58:29.500 W ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 08:58:29.653 W ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 08:58:29.718 I ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 08:58:29.800 W ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 08:58:32.275 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 08:58:32.307 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:58:49.767 I ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority reason/AddedInterface Add eth0 [10.128.120.9/23]
Sep 09 08:58:50.447 I ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:58:50.743 I ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Created
Sep 09 08:58:50.782 I ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Started
Sep 09 08:58:50.990 I ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Ready
Sep 09 08:58:51.410 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority reason/AddedInterface Add eth0 [10.128.121.244/23]
Sep 09 08:58:51.665 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority reason/AddedInterface Add eth0 [10.128.121.146/23]
Sep 09 08:58:52.093 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:58:52.370 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Pulling image/k8s.gcr.io/pause:3.2
Sep 09 08:58:52.419 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Created
Sep 09 08:58:52.494 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Started
Sep 09 08:58:52.667 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Ready
Sep 09 08:58:56.543 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:58:56.823 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Created
Sep 09 08:58:56.882 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Started
Sep 09 08:58:57.633 I ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Ready
Sep 09 08:58:59.618 I ns/kube-system pod/critical-pod node/ reason/Created
Sep 09 08:58:59.656 W ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 08:58:59.676 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Killing
Sep 09 08:58:59.709 W ns/kube-system pod/critical-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 08:58:59.716 I ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority reason/Preempted Preempted by kube-system/critical-pod on node ostest-5xqm8-worker-0-cbbx9
Sep 09 08:58:59.727 W ns/kube-system pod/critical-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 08:59:06.788 W ns/e2e-sched-preemption-4087 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:59:06.821 I ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 08:59:18.921 I ns/kube-system pod/critical-pod reason/AddedInterface Add eth0 [10.128.57.216/23]
Sep 09 08:59:19.600 I ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 container/critical-pod reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 08:59:19.906 I ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 container/critical-pod reason/Created
Sep 09 08:59:19.989 I ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 container/critical-pod reason/Started
Sep 09 08:59:20.828 I ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 container/critical-pod reason/Ready
Sep 09 08:59:21.675 W ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 08:59:22.939 I ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 container/critical-pod reason/Killing
Sep 09 08:59:23.094 W ns/kube-system pod/critical-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 08:59:28.404 W ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 08:59:28.418 W ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 1s
Sep 09 08:59:29.960 W ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:59:29.960 W ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/NotReady
Sep 09 08:59:30.171 W ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 08:59:30.171 W ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/NotReady
Sep 09 08:59:30.976 W ns/e2e-sched-preemption-4087 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 08:59:33.968 W ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 pod has been pending longer than a minute
Sep 09 08:59:37.265 W ns/e2e-sched-preemption-4087 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 09:00:12.269 W ns/openshift-kuryr pod/kuryr-cni-f78cf node/ostest-5xqm8-worker-0-twrlr reason/Unhealthy Liveness probe failed: Get "http://10.196.3.122:8090/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (2 times)
Sep 09 09:00:23.968 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ reason/Created
Sep 09 09:00:24.002 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 09:00:24.265 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 reason/TaintManagerEviction Cancelling deletion of Pod e2e-taint-single-pod-6919/taint-eviction-4
Sep 09 09:00:40.044 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 reason/AddedInterface Add eth0 [10.128.118.183/23]
Sep 09 09:00:40.672 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:00:40.915 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Created
Sep 09 09:00:41.020 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Started
Sep 09 09:00:41.440 I ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ostest-5xqm8-worker-0-rzx47 container/pause reason/Ready
Sep 09 09:00:52.158 W ns/openshift-apiserver pod/apiserver-69bcf577dc-zfmrx node/ostest-5xqm8-master-0 reason/Unhealthy Liveness probe failed: Get "https://10.128.102.229:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (3 times)
Sep 09 09:01:46.553 W ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 09:01:57.226 W ns/e2e-taint-single-pod-6919 pod/taint-eviction-4 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 09:02:40.829 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ reason/Created
Sep 09 09:02:40.874 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 09:02:40.897 I ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ reason/Created
Sep 09 09:02:40.945 I ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 09:02:40.962 I ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ reason/Created
Sep 09 09:02:40.990 I ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 09:02:55.050 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority reason/AddedInterface Add eth0 [10.128.121.125/23]
Sep 09 09:02:55.508 I ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority reason/AddedInterface Add eth0 [10.128.120.176/23]
Sep 09 09:02:55.766 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:02:56.054 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Created
Sep 09 09:02:56.104 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Started
Sep 09 09:02:56.334 W ns/openshift-kuryr pod/kuryr-controller-5c7b79dcdb-r7fhz node/ostest-5xqm8-master-1 reason/Unhealthy Liveness probe failed: Get "http://10.196.3.65:8091/alive": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (141 times)
Sep 09 09:02:56.549 I ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:02:56.710 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Ready
Sep 09 09:02:56.829 I ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Created
Sep 09 09:02:56.894 I ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Started
Sep 09 09:02:57.809 I ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr container/pod2-sched-preemption-medium-priority reason/Ready
Sep 09 09:02:59.071 I ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority reason/AddedInterface Add eth0 [10.128.120.155/23]
Sep 09 09:02:59.762 I ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:03:00.061 I ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Created
Sep 09 09:03:00.104 I ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Started
Sep 09 09:03:00.844 I ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 container/pod1-sched-preemption-medium-priority reason/Ready
Sep 09 09:03:01.072 I ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ reason/Created
Sep 09 09:03:01.144 W ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 09:03:01.163 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority reason/Preempted Preempted by e2e-sched-preemption-1925/preemptor-pod on node ostest-5xqm8-worker-0-cbbx9
Sep 09 09:03:01.169 I ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/Killing
Sep 09 09:03:01.169 W ns/e2e-sched-preemption-1925 pod/preemptor-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 09:03:01.198 W ns/e2e-sched-preemption-1925 pod/preemptor-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 09:03:02.796 W ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 09:03:02.796 W ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 container/pod0-sched-preemption-low-priority reason/NotReady
Sep 09 09:03:03.530 W ns/e2e-sched-preemption-1925 pod/preemptor-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient scheduling.k8s.io/foo.
Sep 09 09:03:06.782 W ns/e2e-sched-preemption-1925 pod/pod0-sched-preemption-low-priority node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 09:03:08.508 I ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 09:03:11.594 I ns/e2e-sched-preemption-1925 pod/preemptor-pod reason/AddedInterface Add eth0 [10.128.121.147/23]
Sep 09 09:03:12.208 I ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ostest-5xqm8-worker-0-cbbx9 container/preemptor-pod reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:03:12.538 I ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ostest-5xqm8-worker-0-cbbx9 container/preemptor-pod reason/Created
Sep 09 09:03:12.584 I ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ostest-5xqm8-worker-0-cbbx9 container/preemptor-pod reason/Started
Sep 09 09:03:12.828 I ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ostest-5xqm8-worker-0-cbbx9 container/preemptor-pod reason/Ready
Sep 09 09:03:14.377 I ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ reason/Created
Sep 09 09:03:14.403 I ns/e2e-svc-latency-7792 replicationcontroller/svc-latency-rc reason/SuccessfulCreate Created pod: svc-latency-rc-zllc2
Sep 09 09:03:14.417 I ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 09:03:21.708 W ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 09:03:21.727 W ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 1s
Sep 09 09:03:21.744 W ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 1s
Sep 09 09:03:23.955 W ns/e2e-sched-preemption-1925 pod/pod1-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 09:03:26.778 W ns/e2e-sched-preemption-1925 pod/preemptor-pod node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 09:03:32.374 W ns/e2e-sched-preemption-1925 pod/pod2-sched-preemption-medium-priority node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 09:03:38.518 I ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 reason/AddedInterface Add eth0 [10.128.118.82/23]
Sep 09 09:03:39.126 I ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ostest-5xqm8-worker-0-rzx47 container/svc-latency-rc reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:03:39.399 I ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ostest-5xqm8-worker-0-rzx47 container/svc-latency-rc reason/Created
Sep 09 09:03:39.448 I ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ostest-5xqm8-worker-0-rzx47 container/svc-latency-rc reason/Started
Sep 09 09:03:39.989 I ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ostest-5xqm8-worker-0-rzx47 container/svc-latency-rc reason/Ready
Sep 09 09:03:48.125 I ns/e2e-sched-pred-8361 pod/restricted-pod node/ reason/Created
Sep 09 09:03:48.157 W ns/e2e-sched-pred-8361 pod/restricted-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 09:03:48.200 W ns/e2e-sched-pred-8361 pod/restricted-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 09:03:50.611 W ns/e2e-sched-pred-8361 pod/restricted-pod reason/FailedScheduling 0/6 nodes are available: 6 node(s) didn't match node selector.
Sep 09 09:03:51.037 I ns/e2e-nsdeletetest-4806 pod/test-pod node/ reason/Created
Sep 09 09:03:51.080 I ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 09:03:51.980 W ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 1s
Sep 09 09:03:57.818 W ns/e2e-sched-pred-8361 pod/restricted-pod node/ reason/GracefulDelete in 0s
Sep 09 09:03:57.840 W ns/e2e-sched-pred-8361 pod/restricted-pod node/ reason/Deleted
Sep 09 09:04:12.843 W ns/e2e-svc-latency-7792 pod/svc-latency-rc-zllc2 node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 09:04:39.190 I ns/e2e-nsdeletetest-4806 pod/test-pod reason/AddedInterface Add eth0 [10.128.123.44/23]
Sep 09 09:04:39.756 I ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:04:39.993 I ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/Created
Sep 09 09:04:40.055 I ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/Started
Sep 09 09:04:40.177 I ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/Ready
Sep 09 09:04:48.350 W ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 09:04:50.228 W ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 09:04:50.228 W ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 container/nginx reason/NotReady
Sep 09 09:04:57.231 W ns/e2e-nsdeletetest-4806 pod/test-pod node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 09:05:12.744 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ reason/Created
Sep 09 09:05:12.773 I ns/e2e-daemonsets-8157 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-88gzd
Sep 09 09:05:12.828 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ reason/Created
Sep 09 09:05:12.866 I ns/e2e-daemonsets-8157 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-85s2z
Sep 09 09:05:12.884 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ reason/Created
Sep 09 09:05:12.908 I ns/e2e-daemonsets-8157 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-2jmxk
Sep 09 09:05:12.953 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 09:05:12.968 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 09:05:12.981 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 09:05:34.911 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z reason/AddedInterface Add eth0 [10.128.118.241/23]
Sep 09 09:05:35.585 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 09:05:35.902 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 container/app reason/Created
Sep 09 09:05:35.950 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 container/app reason/Started
Sep 09 09:05:35.986 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd reason/AddedInterface Add eth0 [10.128.118.205/23]
Sep 09 09:05:36.097 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk reason/AddedInterface Add eth0 [10.128.118.113/23]
Sep 09 09:05:36.366 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 container/app reason/Ready
Sep 09 09:05:36.888 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 09:05:36.922 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr container/app reason/Pulled image/docker.io/library/httpd:2.4.38-alpine
Sep 09 09:05:37.217 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Created
Sep 09 09:05:37.249 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr container/app reason/Created
Sep 09 09:05:37.329 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Started
Sep 09 09:05:37.340 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr container/app reason/Started
Sep 09 09:05:37.430 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Ready
Sep 09 09:05:38.021 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr container/app reason/Ready
Sep 09 09:05:38.852 W ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 09:05:38.866 I ns/e2e-daemonsets-8157 daemonset/daemon-set reason/SuccessfulDelete Deleted pod: daemon-set-88gzd
Sep 09 09:05:40.048 I ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr container/app reason/Killing
Sep 09 09:05:42.095 W ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 09:05:42.095 W ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr container/app reason/NotReady
Sep 09 09:05:43.112 W ns/e2e-daemonsets-8157 pod/daemon-set-88gzd node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 09:05:43.194 I ns/e2e-daemonsets-8157 pod/daemon-set-2n8tx node/ reason/Created
Sep 09 09:05:43.213 I ns/e2e-daemonsets-8157 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-2n8tx
Sep 09 09:05:43.219 I ns/e2e-daemonsets-8157 pod/daemon-set-2n8tx node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 09:05:43.974 W ns/e2e-daemonsets-8157 pod/daemon-set-2n8tx node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 09:05:43.998 I ns/e2e-daemonsets-8157 daemonset/daemon-set reason/SuccessfulDelete Deleted pod: daemon-set-2n8tx
Sep 09 09:05:47.206 W ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 09:05:47.215 W ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 09:05:47.256 I ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 09:05:47.270 I ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 09:05:47.289 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk node/ reason/Created
Sep 09 09:05:47.291 I ns/openshift-marketplace pod/community-operators-mrn2b node/ reason/Created
Sep 09 09:05:47.345 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 09:05:47.366 I ns/openshift-marketplace pod/community-operators-mrn2b node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 09:05:47.385 W ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 09:05:47.422 I ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 09:05:47.439 W ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 0s
Sep 09 09:05:47.466 I ns/openshift-marketplace pod/redhat-operators-96wzk node/ reason/Created
Sep 09 09:05:47.501 I ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Killing
Sep 09 09:05:47.562 I ns/openshift-marketplace pod/certified-operators-4klgc node/ reason/Created
Sep 09 09:05:47.644 I ns/openshift-marketplace pod/redhat-operators-96wzk node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 09:05:47.745 I ns/openshift-marketplace pod/certified-operators-4klgc node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 09:05:49.014 W ns/openshift-marketplace pod/redhat-marketplace-njpdl node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 09:05:49.030 W ns/openshift-marketplace pod/community-operators-rq8jr node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 09:05:49.085 W ns/openshift-marketplace pod/redhat-operators-p4bkw node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 09:05:49.250 W ns/openshift-marketplace pod/certified-operators-bjf7t node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 09:05:51.325 I ns/openshift-marketplace pod/community-operators-mrn2b reason/AddedInterface Add eth0 [10.128.3.137/23]
Sep 09 09:05:52.035 I ns/openshift-marketplace pod/community-operators-mrn2b node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 09:05:52.378 W ns/e2e-daemonsets-8157 pod/daemon-set-2n8tx node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 09:05:52.419 I ns/e2e-daemonsets-8157 pod/daemon-set-k7gpc node/ reason/Created
Sep 09 09:05:52.427 I ns/e2e-daemonsets-8157 daemonset/daemon-set reason/SuccessfulCreate Created pod: daemon-set-k7gpc
Sep 09 09:05:52.461 I ns/e2e-daemonsets-8157 pod/daemon-set-k7gpc node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 09:05:53.060 I ns/openshift-marketplace pod/certified-operators-4klgc reason/AddedInterface Add eth0 [10.128.3.205/23]
Sep 09 09:05:53.232 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk reason/AddedInterface Add eth0 [10.128.3.89/23]
Sep 09 09:05:53.291 I ns/openshift-marketplace pod/redhat-operators-96wzk reason/AddedInterface Add eth0 [10.128.3.14/23]
Sep 09 09:05:53.406 W ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 reason/GracefulDelete in 30s
Sep 09 09:05:53.439 W ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 reason/GracefulDelete in 30s
Sep 09 09:05:53.451 I ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 container/app reason/Killing
Sep 09 09:05:53.454 W ns/e2e-daemonsets-8157 pod/daemon-set-k7gpc node/ostest-5xqm8-worker-0-twrlr reason/GracefulDelete in 30s
Sep 09 09:05:53.558 I ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 container/app reason/Killing
Sep 09 09:05:54.213 I ns/openshift-marketplace pod/certified-operators-4klgc node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 09:05:54.316 I ns/openshift-marketplace pod/redhat-operators-96wzk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 09:05:54.362 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulling image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 09:05:54.795 I ns/openshift-marketplace pod/community-operators-mrn2b node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Pulled image/registry.redhat.io/redhat/community-operator-index:latest
Sep 09 09:05:55.045 I ns/openshift-marketplace pod/community-operators-mrn2b node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Created
Sep 09 09:05:55.140 I ns/openshift-marketplace pod/community-operators-mrn2b node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Started
Sep 09 09:05:55.908 W ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 09 09:05:55.908 W ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 container/app reason/NotReady
Sep 09 09:05:56.300 W ns/e2e-daemonsets-8157 pod/daemon-set-85s2z node/ostest-5xqm8-worker-0-rzx47 reason/Deleted
Sep 09 09:05:58.064 I ns/openshift-marketplace pod/certified-operators-4klgc node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/certified-operator-index:v4.6
Sep 09 09:05:58.363 I ns/openshift-marketplace pod/certified-operators-4klgc node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 09:05:58.459 I ns/openshift-marketplace pod/certified-operators-4klgc node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 09:05:58.659 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-marketplace-index:v4.6
Sep 09 09:05:59.009 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 09:05:59.073 I ns/openshift-marketplace pod/redhat-operators-96wzk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Pulled image/registry.redhat.io/redhat/redhat-operator-index:v4.6
Sep 09 09:05:59.099 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 09:05:59.406 I ns/openshift-marketplace pod/redhat-operators-96wzk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Created
Sep 09 09:05:59.474 I ns/openshift-marketplace pod/redhat-operators-96wzk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Started
Sep 09 09:06:02.366 W ns/e2e-daemonsets-8157 pod/daemon-set-k7gpc node/ostest-5xqm8-worker-0-twrlr reason/Deleted
Sep 09 09:06:06.454 I ns/openshift-marketplace pod/redhat-marketplace-tg6lk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 09:06:06.798 W ns/e2e-daemonsets-8157 pod/daemon-set-2jmxk node/ostest-5xqm8-worker-0-cbbx9 reason/Deleted
Sep 09 09:06:07.218 I ns/openshift-marketplace pod/community-operators-mrn2b node/ostest-5xqm8-worker-0-rzx47 container/registry-server reason/Ready
Sep 09 09:06:08.796 I ns/e2e-sched-pred-4852 pod/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 node/ reason/Created
Sep 09 09:06:08.830 I ns/e2e-sched-pred-4852 pod/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae node/ reason/Created
Sep 09 09:06:08.857 I ns/e2e-sched-pred-4852 pod/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 node/ostest-5xqm8-worker-0-cbbx9 reason/Scheduled
Sep 09 09:06:08.876 I ns/e2e-sched-pred-4852 pod/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 node/ reason/Created
Sep 09 09:06:08.888 I ns/e2e-sched-pred-4852 pod/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae node/ostest-5xqm8-worker-0-rzx47 reason/Scheduled
Sep 09 09:06:08.910 I ns/e2e-sched-pred-4852 pod/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 node/ostest-5xqm8-worker-0-twrlr reason/Scheduled
Sep 09 09:06:10.174 I ns/openshift-marketplace pod/redhat-operators-96wzk node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 09:06:12.392 I ns/openshift-marketplace pod/certified-operators-4klgc node/ostest-5xqm8-worker-0-cbbx9 container/registry-server reason/Ready
Sep 09 09:06:29.487 I ns/e2e-sched-pred-4852 pod/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 reason/AddedInterface Add eth0 [10.128.120.88/23]
Sep 09 09:06:29.943 I ns/e2e-sched-pred-4852 pod/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 reason/AddedInterface Add eth0 [10.128.121.73/23]
Sep 09 09:06:30.012 I ns/e2e-sched-pred-4852 pod/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae reason/AddedInterface Add eth0 [10.128.120.206/23]
Sep 09 09:06:30.280 I ns/e2e-sched-pred-4852 pod/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 node/ostest-5xqm8-worker-0-twrlr container/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:06:30.658 I ns/e2e-sched-pred-4852 pod/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 node/ostest-5xqm8-worker-0-cbbx9 container/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:06:30.658 I ns/e2e-sched-pred-4852 pod/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 node/ostest-5xqm8-worker-0-twrlr container/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 reason/Created
Sep 09 09:06:30.672 I ns/e2e-sched-pred-4852 pod/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae node/ostest-5xqm8-worker-0-rzx47 container/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 09 09:06:30.778 I ns/e2e-sched-pred-4852 pod/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 node/ostest-5xqm8-worker-0-twrlr container/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 reason/Started
Sep 09 09:06:30.960 I ns/e2e-sched-pred-4852 pod/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 node/ostest-5xqm8-worker-0-cbbx9 container/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 reason/Created
Sep 09 09:06:30.984 I ns/e2e-sched-pred-4852 pod/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae node/ostest-5xqm8-worker-0-rzx47 container/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae reason/Created
Sep 09 09:06:31.011 I ns/e2e-sched-pred-4852 pod/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 node/ostest-5xqm8-worker-0-cbbx9 container/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 reason/Started
Sep 09 09:06:31.030 I ns/e2e-sched-pred-4852 pod/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae node/ostest-5xqm8-worker-0-rzx47 container/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae reason/Started
Sep 09 09:06:31.051 I ns/e2e-sched-pred-4852 pod/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 node/ostest-5xqm8-worker-0-cbbx9 container/filler-pod-ba9c3d68-a923-404f-8e1d-0a6bfee2b8b2 reason/Ready
Sep 09 09:06:31.529 I ns/e2e-sched-pred-4852 pod/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 node/ostest-5xqm8-worker-0-twrlr container/filler-pod-e573bd84-58f6-4e9e-8279-c4bff101f011 reason/Ready
Sep 09 09:06:31.590 I ns/e2e-sched-pred-4852 pod/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae node/ostest-5xqm8-worker-0-rzx47 container/filler-pod-3bb3b548-2fd3-444c-a170-c0b9165857ae reason/Ready
Sep 09 09:06:32.967 I ns/e2e-sched-pred-4852 pod/additional-pod node/ reason/Created
Sep 09 09:06:32.995 W ns/e2e-sched-pred-4852 pod/additional-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient cpu.
Sep 09 09:06:33.032 W ns/e2e-sched-pred-4852 pod/additional-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient cpu.
Sep 09 09:06:35.754 W ns/e2e-sched-pred-4852 pod/additional-pod reason/FailedScheduling 0/6 nodes are available: 6 Insufficient cpu.


Stderr