ansible-playbook 2.6.11
  config file = /usr/share/ansible/openshift-ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible-playbook
  python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /usr/share/ansible/openshift-ansible/ansible.cfg as config file
Parsed /etc/ansible/hosts inventory source with ini plugin
statically imported: /usr/share/ansible/openshift-ansible/roles/rhel_subscribe/tasks/satellite.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_config_facts.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_registry_facts.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml
statically imported: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/set_facts.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/drop_etcdctl.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/firewall.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/registry_auth.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static_shim.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_scheduler.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml
statically imported: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml
statically imported: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_services.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/prepull.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/upgrade/rpm_upgrade.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/prepull_check.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/copy_image_to_ostree.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/upgrade/stop_services.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/upgrade/rpm_upgrade_install.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/selinux_container_cgroup.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/upgrade/config_changes.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/systemd_units.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq_install.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq/no-network-manager.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq/network-manager.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/dnsmasq/network-manager.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/upgrade/restart.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/node_system_container.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/../tasks/node_system_container_install.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/config/configure-node-settings.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/configure-proxy-settings.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/journald.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_config_facts.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_registry_facts.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/cluster_health.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/ansible_service_broker/tasks/facts.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/ansible_service_broker/tasks/upgrade.yml
statically imported: /usr/share/ansible/openshift-ansible/roles/ansible_service_broker/tasks/migrate.yml
statically imported: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml
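The "statically imported" entries above are task files that Ansible resolves at parse time via import_tasks; files pulled in dynamically with include_tasks would not appear in this list. A minimal illustration of the distinction, using a file name from the list above (the surrounding task file is assumed):

# Resolved while the playbook is parsed -> shows up as "statically imported"
- import_tasks: verify_cluster_health.yml

# Resolved only when execution reaches this task -> never listed in the banner
- include_tasks: verify_cluster_health.yml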
PLAYBOOK: upgrade_control_plane.yml *****************************************************************
117 plays in playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade_control_plane.yml

PLAY [Initialization Checkpoint Start] **************************************************************
META: ran handlers

TASK [Set install initialization 'In Progress'] *****************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/main.yml:11
Wednesday 09 January 2019 15:39:25 +0100 (0:00:00.082) 0:00:00.082 *****
ok: [sp-os-master01.os.ad.scanplus.de] => {
    "ansible_stats": {
        "aggregate": true,
        "data": {
            "installer_phase_initialize": {
                "playbook": "",
                "start": "20190109153925Z",
                "status": "In Progress",
                "title": "Initialization"
            }
        },
        "per_host": false
    },
    "changed": false
}
META: ran handlers
META: ran handlers

PLAY [Populate config host groups] ******************************************************************
META: ran handlers

TASK [Load group name mapping variables] ************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:7
Wednesday 09 January 2019 15:39:25 +0100 (0:00:00.070) 0:00:00.152 *****
ok: [localhost] => {
    "ansible_facts": {
        "g_all_hosts": "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) | union(g_new_etcd_hosts) | union(g_lb_hosts) | union(g_nfs_hosts) | union(g_new_node_hosts)| union(g_new_master_hosts) | default([]) }}",
        "g_etcd_hosts": "{{ groups.etcd | default([]) }}",
        "g_glusterfs_hosts": "{{ groups.glusterfs | default([]) }}",
        "g_glusterfs_registry_hosts": "{{ groups.glusterfs_registry | default(g_glusterfs_hosts) }}",
        "g_lb_hosts": "{{ groups.lb | default([]) }}",
        "g_master_hosts": "{{ groups.masters | default([]) }}",
        "g_new_etcd_hosts": "{{ groups.new_etcd | default([]) }}",
        "g_new_master_hosts": "{{ groups.new_masters | default([]) }}",
        "g_new_node_hosts": "{{ groups.new_nodes | default([]) }}",
        "g_nfs_hosts": "{{ groups.nfs | default([]) }}",
        "g_node_hosts": "{{ groups.nodes | default([]) }}"
    },
    "ansible_included_var_files": [
        "/usr/share/ansible/openshift-ansible/playbooks/init/vars/cluster_hosts.yml"
    ],
    "changed": false
}

TASK [Evaluate groups - g_nfs_hosts is single host] *************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:10
Wednesday 09 January 2019 15:39:25 +0100 (0:00:00.035) 0:00:00.187 *****
skipping: [localhost] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
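The installer-phase checkpoints in this run are recorded with the set_stats module; the "ansible_stats" block in the first task result above mirrors its parameters (aggregate, data, per_host). A sketch of what such a checkpoint task plausibly looks like, with the field values copied from the log and the play layout assumed rather than taken from main.yml:

- name: Initialization Checkpoint Start
  hosts: oo_first_master
  gather_facts: false
  tasks:
    - name: Set install initialization 'In Progress'
      set_stats:
        aggregate: true     # merge into the run-wide stats dictionary
        per_host: false     # one record for the whole run, not per host
        data:
          installer_phase_initialize:
            title: "Initialization"
            playbook: ""
            status: "In Progress"
            start: "20190109153925Z"   # the real playbook generates this timestamp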
TASK [Evaluate oo_all_hosts] ************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:15
Wednesday 09 January 2019 15:39:25 +0100 (0:00:00.027) 0:00:00.215 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-infra01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-infra01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-infra01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-infra01.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-infra02.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-infra02.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-infra02.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-infra02.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node02.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node02.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node02.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node02.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node03.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node03.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node03.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node03.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node04.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node04.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node04.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node04.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node05.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node05.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node05.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node05.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node06.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node06.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node06.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node06.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node07.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node07.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node07.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node07.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node08.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node08.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node08.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node08.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node09.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node09.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node09.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node09.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node10.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node10.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node10.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node10.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node11.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node11.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node11.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node11.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node12.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node12.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_all_hosts"], "host_name": "sp-os-node12.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node12.os.ad.scanplus.de"}

TASK [Evaluate oo_masters] **************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:24
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.155) 0:00:00.370 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_masters"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}

TASK [Evaluate oo_first_master] *********************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:33
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.049) 0:00:00.420 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => {"add_host": {"groups": ["oo_first_master"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false}

TASK [Evaluate oo_new_etcd_to_config] ***************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:42
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.040) 0:00:00.461 *****
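Each "Evaluate oo_*" task builds an in-memory inventory group by looping add_host over the g_* lists loaded from cluster_hosts.yml; the per-item "creating host via 'add_host'" lines above are that loop running. A minimal sketch of the pattern, using the group and variable names from this log (the exact task in evaluate_groups.yml may differ in detail):

- name: Evaluate oo_all_hosts
  add_host:
    name: "{{ item }}"
    groups: oo_all_hosts
  with_items: "{{ g_all_hosts | default([]) }}"
  changed_when: no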
TASK [Evaluate oo_masters_to_config] ****************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:51
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.028) 0:00:00.490 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_masters_to_config"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}

TASK [Evaluate oo_etcd_to_config] *******************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:60
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.050) 0:00:00.540 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_etcd_to_config"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}

TASK [Evaluate oo_first_etcd] ***********************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:69
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.043) 0:00:00.584 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => {"add_host": {"groups": ["oo_first_etcd"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false}

TASK [Evaluate oo_etcd_hosts_to_upgrade] ************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:81
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.038) 0:00:00.623 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_etcd_hosts_to_upgrade"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}

TASK [Evaluate oo_etcd_hosts_to_backup] *************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:88
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.041) 0:00:00.664 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_etcd_hosts_to_backup"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}

TASK [Evaluate oo_nodes_to_config] ******************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:95
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.040) 0:00:00.705 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-infra01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-infra01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-infra01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-infra01.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-infra02.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-infra02.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-infra02.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-infra02.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node02.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node02.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node02.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node02.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node03.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node03.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node03.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node03.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node04.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node04.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node04.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node04.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node05.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node05.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node05.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node05.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node06.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node06.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node06.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node06.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node07.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node07.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node07.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node07.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node08.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node08.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node08.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node08.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node09.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node09.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node09.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node09.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node10.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node10.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node10.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node10.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node11.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node11.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node11.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node11.os.ad.scanplus.de"}
creating host via 'add_host': hostname=sp-os-node12.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node12.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_nodes_to_config"], "host_name": "sp-os-node12.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-node12.os.ad.scanplus.de"}

TASK [Evaluate oo_lb_to_config] *********************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:104
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.128) 0:00:00.834 *****

TASK [Evaluate oo_nfs_to_config] ********************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:113
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.023) 0:00:00.857 *****

TASK [Evaluate oo_glusterfs_to_config] **************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:122
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.022) 0:00:00.880 *****

TASK [Evaluate oo_etcd_to_migrate] ******************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:131
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.026) 0:00:00.907 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => {"add_host": {"groups": ["oo_etcd_to_migrate"], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {}}, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de"}
META: ran handlers
META: ran handlers
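Tasks such as "Evaluate oo_lb_to_config", "Evaluate oo_nfs_to_config" and "Evaluate oo_glusterfs_to_config" print no per-item results above because the corresponding inventory groups (lb, nfs, glusterfs) are empty in this inventory, and a loop over an empty list never executes its task body. A minimal sketch of that behaviour, following the same pattern as the other evaluate tasks (names taken from this log):

- name: Evaluate oo_lb_to_config
  add_host:
    name: "{{ item }}"
    groups: oo_lb_to_config
  with_items: "{{ g_lb_hosts | default([]) }}"   # empty list -> zero iterations, no output
  changed_when: no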
PLAY [Ensure that all non-node hosts are accessible] ************************************************

TASK [Gathering Facts] ******************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:2
Wednesday 09 January 2019 15:39:26 +0100 (0:00:00.057) 0:00:00.964 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_veth771f6724": {"macaddress": "22:33:70:56:af:25", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth771f6724", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2033:70ff:fe56:af25"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_CLIENT": "172.30.80.240 53276 22", "LESSOPEN":
"||/usr/bin/lesspipe.sh %s", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "PWD": "/root", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "HOME": "/root", "SELINUX_LEVEL_REQUESTED": "", "XDG_SESSION_ID": "92", "_": "/usr/bin/python", "SSH_CONNECTION": "172.30.80.240 53276 172.30.80.240 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:34:92", "network": "172.30.80.0", "mtu": 1500, "broadcast": "172.30.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.80.240", "interface": "ens192", "type": "ether", "gateway": "172.30.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "8a:5b:aa:a6:5a:15", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422AFDB1-25A7-1A07-6E08-A455AF861E9A", "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:edc7324c21d6", "ansible_all_ipv6_addresses": ["fe80::dc2c:86ff:fe4d:ac4d", "fe80::89:84ff:fedf:8477", "fe80::6023:59ff:fe80:3306", "fe80::2033:70ff:fe56:af25", "fe80::c0c:88ff:fefa:1f8", "fe80::7a:7eff:feaf:fd47", "fe80::250:56ff:feaa:3492"], "ansible_uptime_seconds": 21748, "ansible_kernel": 
"3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_product_serial": "VMware-42 2a fd b1 25 a7 1a 07-6e 08 a4 55 af 86 1e 9a", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_distribution_version": "7.5", "ansible_local": {"openshift": {"node": {"proxy_mode": "iptables", "dns_ip": "172.30.80.240", "bootstrapped": true}, "builddefaults": {"config": {"BuildDefaults": {"configuration": {"apiVersion": "v1", "kind": "BuildDefaultsConfig", "env": [{"name": "HTTP_PROXY", "value": ""}, {"name": "HTTPS_PROXY", "value": ""}, {"name": "NO_PROXY", "value": ""}, {"name": "http_proxy", "value": ""}, {"name": "https_proxy", "value": ""}, {"name": "no_proxy", "value": ""}], "resources": {"requests": {}, "limits": {}}}}}}, "logging": {"elasticsearch": {"pvc": {}, "ops": {"pvc": {}}}}, "cloudprovider": {}, "master": {"admission_plugin_config": {"openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "onResources": [{"resource": "pods"}, {"resource": "builds"}], "reject": true, "name": "execution-denied", "matchImageAnnotations": [{"value": "true", "key": "images.openshift.io/deny-execution"}]}], "apiVersion": "v1"}}}, "named_certificates": [{"keyfile": "/etc/origin/master/named_certificates/cert.key", "certfile": "/etc/origin/master/named_certificates/cert.crt", "names": ["sp-os-master01.os.ad.scanplus.de"], "cafile": "/etc/origin/master/named_certificates/ca.crt"}], "manage_htpasswd": true, "api_port": "8443", "cluster_method": "native", "sdn_cluster_network_cidr": "172.18.0.0/17", "ha": false}, "common": {"system_images_registry": "registry.access.redhat.com", "etcd_runtime": "host", "is_etcd_system_container": false, "rolling_restart_mode": "services", "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "portal_net": "172.18.128.0/17", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "no_proxy_etcd_host_ips": "172.30.80.240", "deployment_type": "openshift-enterprise"}, "hosted": {"templates": {"kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig"}, "routers": [{"name": "router", "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "replicas": "{{ replicas | default(1) }}", "serviceaccount": "router", "namespace": "default", "stats_port": 1936, "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "ports": ["80:80", "443:443"]}], "infra": {"selector": "region=infra"}, "registry": {"force": [false], "name": "docker-registry", "serviceaccount": "registry", "cert": {"expire": {"days": 730}}, "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams", "value": {"maxUnavailable": "25%", "maxSurge": "25%", "updatePeriodSeconds": 1, "intervalSeconds": 1, "timeoutSeconds": 600}}], "env": {"vars": {}}, "volumes": [], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not 
(openshift_master_bootstrap_enabled | default(False)) }}"}, "router": {"certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "create_certificate": true, "image": "openshift3/ose-${component}:${version}", "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "docker": {"registry": {"insecure": {"default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}"}}}, "wfp": {"rc": {"phase": {"msg": "All items completed", "changed": true, "results": [{"_ansible_parsed": true, "stderr_lines": [], "cmd": ["oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }"], "end": "2018-01-31 14:15:11.698797", "_ansible_no_log": false, "stdout": "Complete", "_ansible_item_result": true, "changed": true, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }\'", "removes": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2018-01-31 14:15:11.498834", "attempts": 1, "item": [{"name": "router", "certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "stats_port": 1936, "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "images": "openshift3/ose-${component}:${version}", "selector": "region=infra", "ports": ["80:80", "443:443"]}, {"_ansible_parsed": true, "stderr_lines": [], "cmd": ["oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "end": "2018-01-31 14:15:11.096068", "_ansible_no_log": false, "stdout": "1", "_ansible_item_result": true, "changed": true, "item": {"name": "router", "certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "selector": "region=infra", "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "images": "openshift3/ose-${component}:${version}", "stats_port": 1936, "ports": ["80:80", "443:443"]}, "delta": "0:00:00.196315", "stderr": "", "rc": 0, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, 
"_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .status.latestVersion }\'", "removes": null, "warn": true, "chdir": null, "stdin": null}}, "stdout_lines": ["1"], "start": "2018-01-31 14:15:10.899753", "_ansible_ignore_errors": null, "failed": false}], "rc": 0, "delta": "0:00:00.199963", "stdout_lines": ["Complete"], "failed_when_result": false, "stderr": "", "_ansible_ignore_errors": null, "failed": false}]}}}}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "buildoverrides": {"config": {"BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}}}}}, "ansible_vxlan_sys_4789": {"macaddress": "62:23:59:80:33:06", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6023:59ff:fe80:3306"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 4, "ansible_docker0": {"macaddress": "02:42:93:a0:69:56", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", 
"tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.024293a06956", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBqeHKrd3sLU6lkKU1O+Cz2zREmQW1G7+gIZOxrP9lsqapQtmQHbx+PITMf50yHXnfqLHdi3GwWRGXdelXmeDrk=", "ansible_mounts": [{"block_used": 1239074, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 29478518784, "block_total": 7196904, "mount": "/", "block_available": 5957830, "size_available": 24403271680, "fstype": "ext4", "inode_total": 1839600, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 59749, "block_size": 4096, "inode_available": 1779851}, {"block_used": 49685, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "size_total": 520794112, "block_total": 127147, "mount": "/boot", "block_available": 77462, "size_available": 317284352, "fstype": "xfs", "inode_total": 512000, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 345, "block_size": 4096, "inode_available": 511655}, {"block_used": 3613495, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 47179472896, "block_total": 11518426, "mount": "/var", "block_available": 7904931, "size_available": 32378597376, "fstype": "ext4", "inode_total": 2912208, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 3841, "block_size": 4096, "inode_available": 2908367}, {"block_used": 60930, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 930606, "size_available": 3811762176, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 458, "block_size": 4096, "inode_available": 255542}, {"block_used": 541523, "uuid": "c0e5f752-3494-4b1a-97d4-85ac937e51de", "size_total": 17060331520, "block_total": 4165120, "mount": "/var/log", "block_available": 3623597, "size_available": 14842253312, "fstype": "xfs", "inode_total": 16670720, "options": 
"rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/mapper/vg01-var_log", "inode_used": 402, "block_size": 4096, "inode_available": 16670318}, {"block_used": 1382377, "uuid": "2f1fc08b-3d16-4ec8-92dd-98b265268302", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 6477080, "size_available": 26530119680, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg02-docker", "inode_used": 140484, "block_size": 4096, "inode_available": 15586108}, {"block_used": 1382377, "uuid": "2f1fc08b-3d16-4ec8-92dd-98b265268302", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 6477080, "size_available": 26530119680, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 140484, "block_size": 4096, "inode_available": 15586108}, {"block_used": 1382377, "uuid": "2f1fc08b-3d16-4ec8-92dd-98b265268302", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 6477080, "size_available": 26530119680, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 140484, "block_size": 4096, "inode_available": 15586108}], "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_br0": {"macaddress": "8e:a0:57:4a:95:4f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], 
"active": false, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDMSr/v4wsoC12s0AkIGQt+/RfTRgH6GiFI534EPSly5A+lJYyNTbjy1pdZYxbC8F9MGfwFgDsUKLIi3P+Q6ZQb4df+j4maiUQZhgTZEuR/WVP1FisLDbPMSgVCYytkwRJlxwn4oIn+232QmF36F8/RiGXk+BRYrkjJraikDMh9gXr+gM1QA+TlM3G0ZorKmDMi9lXfFcJ0Lbn4OAC3bs+xzK1I+J7kuKjmNZ2FJuQSjxnj6neaazOfBx8Yfj+PW1vlhjQK0E+DglLZ5mzPj29EZP0xYDFmFfJDKFAPk39f+JWsn4AEy6KEgEuWGvr8jx9MmgpT49Y5IDdz3fyi+k17", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:34:92", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.80.255", "netmask": "255.255.255.0", "network": "172.30.80.0", "address": "172.30.80.240"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:3492"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.2.1", "172.30.80.240"], "ansible_python_version": "2.7.5", "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15606, "free": 262}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 10037, "free": 5831}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.80.240"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", 
"highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdd": ["dm-3"], "sde1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdb": ["dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sde1": ["lvm-pv-uuid-YdhOzR-e5LE-fdaj-DXDE-EP7u-CeZC-NHtsYB"], "sdd": ["lvm-pv-uuid-CNFHT2-Ex2F-r10G-n7Bt-x5iY-axVM-Dqdf7I"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "sdb": ["lvm-pv-uuid-rvJEAe-EfLg-keE2-xWRJ-Utdk-91ZM-731dZ2"], "sdc": ["lvm-pv-uuid-5ZW7wy-lVkU-dcBk-GjjR-4i6Q-s757-2SJZax"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg02-docker", "dm-uuid-LVM-s79Ums9AnkJZ4t0TzbZ5GPMqnNoZW9U23uj7cVXGmXcvivRDtsTtBVCEn3CcVCj5"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["3be19171-9fdb-4539-9387-6bdd0564873a"], "dm-4": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"], "dm-5": ["2f1fc08b-3d16-4ec8-92dd-98b265268302"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, 
"ansible_apparmor": {"status": "disabled"}, "ansible_memfree_mb": 262, "ansible_processor_count": 4, "ansible_hostname": "sp-os-master01", "ansible_tun0": {"macaddress": "02:7a:7e:af:fd:47", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.3.255", "netmask": "255.255.254.0", "network": "172.18.2.0", "address": "172.18.2.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7a:7eff:feaf:fd47"}], "active": true, "type": "ether"}, "ansible_interfaces": ["docker0", "lo", "ovs-system", "veth771f6724", "vxlan_sys_4789", "veth4b33ae1d", "br0", "veth2b6ca660", "ens192", "tun0", "veth205b4885"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_fqdn": "sp-os-master01.os.ad.scanplus.de", "ansible_user_gid": 0, "ansible_nodename": "sp-os-master01.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_lvm": {"pvs": {"/dev/sde1": {"free_g": "0", "size_g": "30.00", "vg": "vg02"}, "/dev/sdd": {"free_g": "0", "size_g": "35.00", "vg": "vg01"}, "/dev/sdb": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sdc": {"free_g": "0", "size_g": "2.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "49.50", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "3.91", "vg": "vg01"}, "var_log": {"size_g": "15.90", "vg": "vg01"}, "var": {"size_g": "44.76", "vg": "vg01"}, "home": {"size_g": "3.91", "vg": "vg01"}, "docker": {"size_g": "30.00", "vg": "vg02"}, "root": {"size_g": "28.02", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "96.49", "num_lvs": "5", "num_pvs": "4"}, "vg02": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_veth205b4885": {"macaddress": "0e:0c:88:fa:01:f8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", 
"rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth205b4885", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c0c:88ff:fefa:1f8"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIPcvGXS2/J4HUQNo5sCUTGXgWa8djEAwKgiQed/K8MV3", "ansible_processor_cores": 1, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153927", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044767", "iso8601_micro": "2019-01-09T14:39:27.494869Z", "weekday": "Wednesday", "time": "15:39:27", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:27Z", "day": "09", "iso8601_basic": "20190109T153927494733", "second": "27"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_product_name": "VMware Virtual Platform", "ansible_veth2b6ca660": {"macaddress": "02:89:84:df:84:77", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": 
"off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2b6ca660", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::89:84ff:fedf:8477"}], "active": true, "speed": 10000}, "ansible_devices": {"sdd": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "73400320", "links": {"masters": ["dm-3"], "labels": [], "ids": ["lvm-pv-uuid-CNFHT2-Ex2F-r10G-n7Bt-x5iY-axVM-Dqdf7I"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var"], "size": "35.00 GB"}, "sde": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sde1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-YdhOzR-e5LE-fdaj-DXDE-EP7u-CeZC-NHtsYB"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg02-docker"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "uuids": []}, "sectors": "103823360", "start": "1026048", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "49.51 GB"}, "sda1": {"sectorsize": 512, "uuid": 
"3be19171-9fdb-4539-9387-6bdd0564873a", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["3be19171-9fdb-4539-9387-6bdd0564873a"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "holders": [], "size": "50.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-rvJEAe-EfLg-keE2-xWRJ-Utdk-91ZM-731dZ2"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "4194304", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-5ZW7wy-lVkU-dcBk-GjjR-4i6Q-s757-2SJZax"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "2.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "33341440", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "15.90 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg02-docker", "dm-uuid-LVM-s79Ums9AnkJZ4t0TzbZ5GPMqnNoZW9U23uj7cVXGmXcvivRDtsTtBVCEn3CcVCj5"], "uuids": ["2f1fc08b-3d16-4ec8-92dd-98b265268302"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "93872128", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "44.76 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", 
"dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "58761216", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "28.02 GB"}}, "ansible_user_uid": 0, "ansible_veth4b33ae1d": {"macaddress": "de:2c:86:4d:ac:4d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4b33ae1d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::dc2c:86ff:fe4d:ac4d"}], "active": true, "speed": 10000}, "ansible_bios_date": "09/21/2015", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"]}}\n', 
'+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') ok: [sp-os-master01.os.ad.scanplus.de] META: ran handlers META: ran handlers META: ran handlers PLAY [Initialize basic host facts] ****************************************************************************************************************************************************************************************************************************************************************************************** META: noop TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:7 Wednesday 09 January 2019 15:39:28 +0100 (0:00:01.358) 0:00:02.322 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node03.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node05.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node07.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node06.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_distribution_version": "7.5", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_CLIENT": "172.30.80.240 42048 22", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "PWD": "/root", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "HOME": "/root", "SELINUX_LEVEL_REQUESTED": "", "XDG_SESSION_ID": "19192", "_": "/usr/bin/python", "SSH_CONNECTION": "172.30.80.240 42048 172.30.80.241 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:09:fd", "network": "172.30.80.0", "mtu": 1500, "broadcast": "172.30.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.80.241", "interface": "ens192", "type": "ether", "gateway": "172.30.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, 
"type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "3e:fd:f6:cd:f9:8d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422A43A6-9C35-EF0E-B5B5-5B561C56E0A1", "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:905241d3554d", "ansible_all_ipv6_addresses": ["fe80::7052:15ff:fead:98b0", "fe80::18fe:fdff:fed1:6baf", "fe80::985f:fdff:febd:4e8e", "fe80::250:56ff:feaa:9fd"], "ansible_uptime_seconds": 5438091, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_product_serial": "VMware-42 2a 43 a6 9c 35 ef 0e-b5 b5 5b 56 1c 56 e0 a1", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"labels": {"region": "infra", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.80.241", "bootstrapped": true}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"system_images_registry": "registry.access.redhat.com", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "portal_net": 
"172.18.128.0/17", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "1a:fe:fd:d1:6b:af", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::18fe:fdff:fed1:6baf"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 2, "ansible_docker0": {"macaddress": "02:42:86:28:ae:ab", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", 
"tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.02428628aeab", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJcQpLOSq6cDS0gscuBQ70G/GpTc5xwchIw2FzAX3/etWIL8Zxw/2wp+1c9AQZuAqnXEBVjOtbJG9pUMOTd5CuE=", "ansible_mounts": [{"block_used": 885995, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 29478518784, "block_total": 7196904, "mount": "/", "block_available": 6310909, "size_available": 25849483264, "fstype": "ext4", "inode_total": 1839600, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 43912, "block_size": 4096, "inode_available": 1795688}, {"block_used": 46593, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "size_total": 520794112, "block_total": 127147, "mount": "/boot", "block_available": 80554, "size_available": 329949184, "fstype": "xfs", "inode_total": 512000, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 344, "block_size": 4096, "inode_available": 511656}, {"block_used": 458399, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 20753092608, "block_total": 5066673, "mount": "/var", "block_available": 4608274, "size_available": 18875490304, "fstype": "ext4", "inode_total": 1289808, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 3277, "block_size": 4096, "inode_available": 1286531}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 33, "block_size": 4096, "inode_available": 255967}, {"block_used": 543549, "uuid": "c0e5f752-3494-4b1a-97d4-85ac937e51de", "size_total": 14917042176, "block_total": 3641856, "mount": "/var/log", "block_available": 3098307, "size_available": 12690665472, "fstype": "xfs", "inode_total": 14577664, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/mapper/vg01-var_log", "inode_used": 244, "block_size": 4096, "inode_available": 14577420}, {"block_used": 2335695, "uuid": "583a1985-eb9b-44ed-a0d0-967e48928eec", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 5523762, "size_available": 22625329152, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg02-docker", "inode_used": 221169, "block_size": 4096, "inode_available": 15505423}, {"block_used": 2335695, "uuid": "583a1985-eb9b-44ed-a0d0-967e48928eec", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 
5523762, "size_available": 22625329152, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 221169, "block_size": 4096, "inode_available": 15505423}, {"block_used": 2335695, "uuid": "583a1985-eb9b-44ed-a0d0-967e48928eec", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 5523762, "size_available": 22625329152, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 221169, "block_size": 4096, "inode_available": 15505423}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/7ea03924-b7a5-11e8-8f0c-005056aa3492/volumes/kubernetes.io~nfs/pvregistry", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.241,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/registry", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}], "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_br0": {"macaddress": "aa:61:e2:79:87:48", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQCrJFjjSBCEFkw/Yz3LDVXMUG7mEPkZ7DQ7EUM8xZA4SZJp3ZGXUzEDCC+ZtzL1LIA8XFBBLAU3pbTcpWkJ5p0qt/VRy4lkYlAc7io1bba3NcyPkkWK5I+GHIP3ZC9JiUbhxwvzInRNr9JKUNGLCQzd2zNCONm/slldoqghrTEhHiBVedR4gPkevIyTwPmHgMd80lFJsIqiwGbcSfFAR/54wuhG2whjEs2paXoMrikq4Y6KkeVSaqXNWuXWOTjKRjjZAkofx3FP7KaDq/XIseuEY5r1AxMSOEqcHRay9C82QsaCldJlafBNSzxWmZBFOTiy3Ap3B6b/TzdYZUatWyZF", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:09:fd", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.80.255", "netmask": "255.255.255.0", "network": "172.30.80.0", "address": "172.30.80.241"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:9fd"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.8.1", "172.30.80.241"], "ansible_python_version": "2.7.5", "ansible_veth3201a940": {"macaddress": "72:52:15:ad:98:b0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off 
[fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth3201a940", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7052:15ff:fead:98b0"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 5788, "used": 5671, "free": 117}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 978, "free": 4810}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.80.241"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": 
"127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 5788, "ansible_device_links": {"masters": {"sdd": ["dm-4"], "sdc1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdb": ["dm-3"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdc1": ["lvm-pv-uuid-BMExWa-5r1k-KUG7-3UeC-t88v-piV8-I9HBN7"], "sdd": ["lvm-pv-uuid-5TY2eg-M0oZ-wmB0-jrD1-O95e-g8tY-3GvQ4A"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "sdb": ["lvm-pv-uuid-1qzRnu-nuxY-xtyM-dXdu-vFFb-WWqL-G06voI"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg02-docker", "dm-uuid-LVM-l5FYz9xpv01ECz5LaFvc6LMMJsCSkiVTXKbdYH4vgDJH5dc6NM2A4LdUB8gESrEU"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["3be19171-9fdb-4539-9387-6bdd0564873a"], "dm-4": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"], "dm-5": ["583a1985-eb9b-44ed-a0d0-967e48928eec"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_memfree_mb": 117, "ansible_processor_count": 2, "ansible_hostname": "sp-os-infra01", "ansible_tun0": {"macaddress": "9a:5f:fd:bd:4e:8e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": 
[], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.9.255", "netmask": "255.255.254.0", "network": "172.18.8.0", "address": "172.18.8.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::985f:fdff:febd:4e8e"}], "active": true, "type": "ether"}, "ansible_interfaces": ["docker0", "lo", "ovs-system", "tun0", "vxlan_sys_4789", "br0", "ens192", "veth3201a940"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_fqdn": "sp-os-infra01.os.ad.scanplus.de", "ansible_user_gid": 0, "ansible_nodename": "sp-os-infra01.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_lvm": {"pvs": {"/dev/sdd": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sdb": {"free_g": "0", "size_g": "20.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "49.50", "vg": "vg01"}, "/dev/sdc1": {"free_g": "0", "size_g": "30.00", "vg": "vg02"}}, "lvs": {"swap": {"size_g": "3.91", "vg": "vg01"}, "var_log": {"size_g": "13.90", "vg": "vg01"}, "var": {"size_g": "29.76", "vg": "vg01"}, "home": {"size_g": "3.91", "vg": "vg01"}, "docker": {"size_g": "30.00", "vg": "vg02"}, "root": {"size_g": "28.02", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "79.50", "num_lvs": "5", "num_pvs": "3"}, "vg02": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIGOmjnc6lGac1Z1aHcR54C6aFzQ37hTsxWvD6rzQtDPP", "ansible_processor_cores": 1, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153928", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044768", "iso8601_micro": "2019-01-09T14:39:28.564548Z", "weekday": "Wednesday", "time": "15:39:28", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:28Z", "day": "09", "iso8601_basic": "20190109T153928564473", "second": "28"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_product_name": "VMware Virtual Platform", "ansible_devices": {"sdd": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-5TY2eg-M0oZ-wmB0-jrD1-O95e-g8tY-3GvQ4A"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, 
"sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "uuids": []}, "sectors": "103823360", "start": "1026048", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "49.51 GB"}, "sda1": {"sectorsize": 512, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["3be19171-9fdb-4539-9387-6bdd0564873a"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "holders": [], "size": "50.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "41943040", "links": {"masters": ["dm-3"], "labels": [], "ids": ["lvm-pv-uuid-1qzRnu-nuxY-xtyM-dXdu-vFFb-WWqL-G06voI"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var"], "size": "20.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdc1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-BMExWa-5r1k-KUG7-3UeC-t88v-piV8-I9HBN7"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg02-docker"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29155328", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.90 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg02-docker", "dm-uuid-LVM-l5FYz9xpv01ECz5LaFvc6LMMJsCSkiVTXKbdYH4vgDJH5dc6NM2A4LdUB8gESrEU"], "uuids": ["583a1985-eb9b-44ed-a0d0-967e48928eec"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": 
"0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62414848", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "29.76 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "58761216", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "28.02 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "09/21/2015", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"]}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node08.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, 
"ansible_facts": {"module_setup": true, "ansible_distribution_version": "7.5", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_CLIENT": "172.30.80.240 52904 22", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "PWD": "/root", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "HOME": "/root", "SELINUX_LEVEL_REQUESTED": "", "XDG_SESSION_ID": "35865", "_": "/usr/bin/python", "SSH_CONNECTION": "172.30.80.240 52904 172.30.80.242 22"}, "ansible_vethb95db0c4": {"macaddress": "42:43:88:c6:ac:de", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb95db0c4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4043:88ff:fec6:acde"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:47:64", "network": "172.30.80.0", "mtu": 1500, "broadcast": "172.30.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.80.242", "interface": "ens192", "type": "ether", "gateway": "172.30.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "d2:9c:8c:25:94:42", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off 
[fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422A8563-28D9-EAC6-14A3-7E67E42D0ECC", "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7785d3d4de59", "ansible_all_ipv6_addresses": ["fe80::4c41:62ff:fe3d:4d77", "fe80::dccf:beff:fe5a:8669", "fe80::dc3d:b0ff:feba:67a7", "fe80::250:56ff:feaa:4764", "fe80::4043:88ff:fec6:acde"], "ansible_uptime_seconds": 10164937, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_product_serial": "VMware-42 2a 85 63 28 d9 ea c6-14 a3 7e 67 e4 2d 0e cc", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"labels": {"region": "infra", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.80.242", "bootstrapped": false}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"portal_net": "172.18.128.0/17", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": 
"de:cf:be:5a:86:69", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::dccf:beff:fe5a:8669"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 2, "ansible_docker0": {"macaddress": "02:42:e0:34:1e:60", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off 
[fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.0242e0341e60", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIF7+MYRdKhxfWHxHcKTca+GIWICLjtjUzaNZ4JtIkQiXFrgKGlipPEoCcGEbbLQCYxwxT+zNPvEE0Rq4g+CPp4=", "ansible_mounts": [{"block_used": 908202, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 29478518784, "block_total": 7196904, "mount": "/", "block_available": 6288702, "size_available": 25758523392, "fstype": "ext4", "inode_total": 1839600, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 43755, "block_size": 4096, "inode_available": 1795845}, {"block_used": 831177, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 20753092608, "block_total": 5066673, "mount": "/var", "block_available": 4235496, "size_available": 17348591616, "fstype": "ext4", "inode_total": 1289808, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 3449, "block_size": 4096, "inode_available": 1286359}, {"block_used": 59319, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932217, "size_available": 3818360832, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 34, "block_size": 4096, "inode_available": 255966}, {"block_used": 280368, "uuid": "c0e5f752-3494-4b1a-97d4-85ac937e51de", "size_total": 4183818240, "block_total": 1021440, "mount": "/var/log", "block_available": 741072, "size_available": 3035430912, "fstype": "xfs", "inode_total": 4096000, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/mapper/vg01-var_log", "inode_used": 151, "block_size": 4096, "inode_available": 4095849}, {"block_used": 1862468, "uuid": "548e11f1-6ebe-40b0-aa94-f129f784e254", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 5996989, "size_available": 24563666944, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg02-docker", "inode_used": 168476, "block_size": 4096, "inode_available": 15558116}, {"block_used": 50915, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "size_total": 520794112, "block_total": 127147, "mount": "/boot", "block_available": 76232, "size_available": 312246272, "fstype": "xfs", "inode_total": 512000, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 345, "block_size": 4096, "inode_available": 511655}, {"block_used": 1862468, "uuid": "548e11f1-6ebe-40b0-aa94-f129f784e254", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 5996989, "size_available": 24563666944, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 168476, "block_size": 4096, 
"inode_available": 15558116}, {"block_used": 1862468, "uuid": "548e11f1-6ebe-40b0-aa94-f129f784e254", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 5996989, "size_available": 24563666944, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 168476, "block_size": 4096, "inode_available": 15558116}], "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_br0": {"macaddress": "9a:6a:ee:80:94:47", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDchteJnebA6tExdxeOxsCrNOS0w4KR0h22McD7zeedRAx7EyhqkAAeJ/lhAXF6//S5DvWV3oL0BaEBPzdgo6Yf6PC6Cwjh9Mg9/HT8dGcWdie5BqsHCdeni9vFQ4Do8E3RlGvhdO6kZkMY/DHBPtFcpGp1qd5zs2gBcpwOIkBv4jdwr+naoMhQTxPApmKdzMQOn2OZEhmXpPPFNmIfW8rIiGXOglUulQn2Oasnt1R28uSwKPFnJIfZ9UqkxY3CaeSXs2Gqf5Ko7jjeVumGqyo1eQwDELulNbUgLgLD7FFJsmoqsl6JP6cLDFoe/N8wvB9olH08KMIWCF5bnsl/hTaF", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:47:64", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", 
"tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "promisc": false, "module": "vmxnet3", "mtu": 1500, "device": "ens192", "ipv4_secondaries": [{"broadcast": "global", "netmask": "255.255.255.255", "network": "172.30.80.245", "address": "172.30.80.245"}], "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.80.255", "netmask": "255.255.255.0", "network": "172.30.80.0", "address": "172.30.80.242"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:4764"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.6.1", "172.30.80.242", "172.30.80.245"], "ansible_python_version": "2.7.5", "ansible_veth25f51aac": {"macaddress": "4e:41:62:3d:4d:77", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off 
[fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth25f51aac", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4c41:62ff:fe3d:4d77"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 5788, "used": 5645, "free": 143}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 2138, "free": 3650}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.80.242"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 5788, "ansible_device_links": {"masters": {"sdc1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdb": ["dm-3"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdc1": ["lvm-pv-uuid-j4CCDA-VbJr-H6PC-r1NW-KE3l-pEea-7ToHeU"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "sdb": ["lvm-pv-uuid-Vcu4SZ-cCqn-ddBB-5nWQ-JFHX-Aa6b-MegX1Y"], "dm-4": ["dm-name-vg01-var_log", 
"dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg02-docker", "dm-uuid-LVM-T4zfwU0jHFv4gk5mPFCIdwsl7gVRPFPFmiS0yreXSRjAEfFPJKvVq7HXwOQMu8g3"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["3be19171-9fdb-4539-9387-6bdd0564873a"], "dm-4": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"], "dm-5": ["548e11f1-6ebe-40b0-aa94-f129f784e254"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_memfree_mb": 143, "ansible_processor_count": 2, "ansible_hostname": "sp-os-infra02", "ansible_tun0": {"macaddress": "de:3d:b0:ba:67:a7", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.7.255", "netmask": "255.255.254.0", "network": "172.18.6.0", "address": "172.18.6.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::dc3d:b0ff:feba:67a7"}], "active": true, "type": "ether"}, "ansible_interfaces": ["vethb95db0c4", "docker0", "lo", "ovs-system", "tun0", "vxlan_sys_4789", "br0", "veth25f51aac", "ens192"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_fqdn": "sp-os-infra02.os.ad.scanplus.de", "ansible_user_gid": 0, "ansible_nodename": "sp-os-infra02.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_lvm": 
{"pvs": {"/dev/sdb": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "49.50", "vg": "vg01"}, "/dev/sdc1": {"free_g": "0", "size_g": "30.00", "vg": "vg02"}}, "lvs": {"swap": {"size_g": "3.91", "vg": "vg01"}, "var_log": {"size_g": "3.91", "vg": "vg01"}, "var": {"size_g": "19.76", "vg": "vg01"}, "home": {"size_g": "3.91", "vg": "vg01"}, "docker": {"size_g": "30.00", "vg": "vg02"}, "root": {"size_g": "28.02", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "59.50", "num_lvs": "5", "num_pvs": "2"}, "vg02": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIHIo2aiXJlQ+qVlwydUDwVNOWFtH3cbiz1wR7yKaMY6m", "ansible_processor_cores": 1, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153928", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044768", "iso8601_micro": "2019-01-09T14:39:28.567947Z", "weekday": "Wednesday", "time": "15:39:28", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:28Z", "day": "09", "iso8601_basic": "20190109T153928567835", "second": "28"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_product_name": "VMware Virtual Platform", "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "uuids": []}, "sectors": "103823360", "start": "1026048", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "49.51 GB"}, "sda1": {"sectorsize": 512, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["3be19171-9fdb-4539-9387-6bdd0564873a"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "holders": [], "size": "50.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-3"], "labels": [], "ids": ["lvm-pv-uuid-Vcu4SZ-cCqn-ddBB-5nWQ-JFHX-Aa6b-MegX1Y"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", 
"support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var"], "size": "10.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdc1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-j4CCDA-VbJr-H6PC-r1NW-KE3l-pEea-7ToHeU"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg02-docker"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg02-docker", "dm-uuid-LVM-T4zfwU0jHFv4gk5mPFCIdwsl7gVRPFPFmiS0yreXSRjAEfFPJKvVq7HXwOQMu8g3"], "uuids": ["548e11f1-6ebe-40b0-aa94-f129f784e254"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "41443328", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "19.76 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "58761216", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", 
"dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "28.02 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "09/21/2015", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"]}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node11.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node09.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-infra01.os.ad.scanplus.de] Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node12.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o 
ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node10.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-infra02.os.ad.scanplus.de] (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_veth602c6833": {"macaddress": "66:bd:11:28:92:f5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth602c6833", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::64bd:11ff:fe28:92f5"}], "active": true, "speed": 10000}, "module_setup": true, "ansible_veth820bf312": {"macaddress": "5e:15:3f:aa:0e:a0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", 
"udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth820bf312", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5c15:3fff:feaa:ea0"}], "active": true, "speed": 10000}, "ansible_distribution_version": "7.5", "ansible_vethd7ecedb9": {"macaddress": "ba:5b:38:8d:00:86", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd7ecedb9", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b85b:38ff:fe8d:86"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_CLIENT": "172.30.80.240 36170 22", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "PWD": "/root", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "HOME": "/root", "SELINUX_LEVEL_REQUESTED": "", "XDG_SESSION_ID": "12470", "_": "/usr/bin/python", "SSH_CONNECTION": "172.30.80.240 36170 172.30.80.233 22"}, "ansible_veth40827115": {"macaddress": "56:2a:2f:73:fd:14", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", 
"tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth40827115", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::542a:2fff:fe73:fd14"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:08:c6", "network": "172.30.80.0", "mtu": 1500, "broadcast": "172.30.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.80.233", "interface": "ens192", "type": "ether", "gateway": "172.30.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "46:4f:7f:0e:b1:28", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", 
"tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422AE278-B4A6-EC38-75F2-6D7816838230", "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7d8d3673a1df", "ansible_all_ipv6_addresses": ["fe80::84fa:4bff:fe1c:cab4", "fe80::2c02:93ff:fea9:299d", "fe80::b499:5dff:fe19:f1ef", "fe80::509c:13ff:fed4:200", "fe80::c8ab:f3ff:fec0:2bee", "fe80::386f:4aff:fee0:90ca", "fe80::8097:11ff:fe37:c4f1", "fe80::24a6:1bff:fe0c:51f5", "fe80::64bd:11ff:fe28:92f5", "fe80::247d:b8ff:feed:333e", "fe80::e864:9ff:fe64:f000", "fe80::98e7:8fff:fe2a:7021", "fe80::4831:d3ff:fea1:be6e", "fe80::d4e4:a1ff:fef1:b276", "fe80::ec71:2dff:fe04:aeed", "fe80::8094:62ff:fed1:b189", "fe80::549a:d3ff:fe1e:4e41", "fe80::4ccd:afff:fe3d:629f", "fe80::a494:f6ff:fe87:e62c", "fe80::e83b:33ff:fedc:3759", "fe80::7025:c3ff:fef0:3be3", "fe80::5c15:3fff:feaa:ea0", "fe80::b85b:38ff:fe8d:86", "fe80::3497:7bff:fe8b:68cb", "fe80::542a:2fff:fe73:fd14", "fe80::8ccf:eff:feab:208b", "fe80::44cd:82ff:fe02:3640", "fe80::68d7:c2ff:fe19:bf5e", "fe80::250:56ff:feaa:8c6", "fe80::f461:feff:fe13:4149", "fe80::48c7:feff:fea2:e10e", "fe80::4b:9aff:fe6b:c202", "fe80::3c9e:71ff:fe55:a968"], "ansible_veth37377637": {"macaddress": "4a:31:d3:a1:be:6e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth37377637", 
"promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4831:d3ff:fea1:be6e"}], "active": true, "speed": 10000}, "ansible_uptime_seconds": 3478413, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_vethff58bec8": {"macaddress": "8e:cf:0e:ab:20:8b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethff58bec8", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8ccf:eff:feab:208b"}], "active": true, "speed": 10000}, "ansible_user_shell": "/bin/bash", "ansible_veth1f1533f4": {"macaddress": "36:97:7b:8b:68:cb", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", 
"tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1f1533f4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3497:7bff:fe8b:68cb"}], "active": true, "speed": 10000}, "ansible_veth12745395": {"macaddress": "26:7d:b8:ed:33:3e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth12745395", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::247d:b8ff:feed:333e"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 2a e2 78 b4 a6 ec 38-75 f2 6d 78 16 83 82 30", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"labels": {"region": "primary", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.80.233", "bootstrapped": true}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"system_images_registry": "registry.access.redhat.com", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, 
"is_containerized": false, "is_node_system_container": false, "portal_net": "172.18.128.0/17", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "ee:71:2d:04:ae:ed", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ec71:2dff:fe04:aeed"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:64:bc:a9:2a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", 
"tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.024264bca92a", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_veth1648b835": {"macaddress": "46:cd:82:02:36:40", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1648b835", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::44cd:82ff:fe02:3640"}], "active": true, "speed": 10000}, "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPCUTAq/u6IuCL9H86SHP+Y+XuKtNrKm+TrF8bdOJd2zF3wfD9/m9QAN51W4E2XfUiO1kLhH+6jE8QBwYrscazY=", "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_veth69981a56": {"macaddress": "ea:64:09:64:f0:00", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth69981a56", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e864:9ff:fe64:f000"}], "active": true, "speed": 10000}, "ansible_vethc6df6285": {"macaddress": "4a:c7:fe:a2:e1:0e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc6df6285", "promisc": true, 
"timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::48c7:feff:fea2:e10e"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_vethd6c17d5f": {"macaddress": "d6:e4:a1:f1:b2:76", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd6c17d5f", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d4e4:a1ff:fef1:b276"}], "active": true, "speed": 10000}, "ansible_tun0": {"macaddress": "ea:3b:33:dc:37:59", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", 
"tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.13.255", "netmask": "255.255.254.0", "network": "172.18.12.0", "address": "172.18.12.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e83b:33ff:fedc:3759"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDCDSiSFqR7nwmuBsKs6DIsJuhQrVnSGY5srE8/Tjd1llcHrr2uU8MqEC4RfHHMQw+P1ieD+ddkyDsU1yIshjcd7poeowiD5S5nGYuTtd1CvlFDrjP42l+s3Pweh5NfEuFCXSB6UbeUzyinFHUjYdeIQUc7f5xKet1CBrk9oyxD8SejtvcPsDwzJnbP1USZMlJS1BrRHtel17Pg1zGEvMXCv/ADmo1KFFOgDIxJxjbplhDjBGcZoNpwGQ8uTrrfjvSShLL9fHBzIoG8Fqe0l4orVHG9VGdKRS4N4TGT+9mQz7UgM3qW12J9Wn5Xuxg53wh0dFE5hJibk+n8RF9CB0uV", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:08:c6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.80.255", "netmask": "255.255.255.0", "network": "172.30.80.0", "address": "172.30.80.233"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:8c6"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_system": "Linux", "ansible_all_ipv4_addresses": 
["172.17.0.1", "172.18.12.1", "172.30.80.233"], "ansible_vethc71b1da1": {"macaddress": "52:9c:13:d4:02:00", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc71b1da1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::509c:13ff:fed4:200"}], "active": true, "speed": 10000}, "ansible_vethc0225bb4": {"macaddress": "b6:99:5d:19:f1:ef", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": 
"off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc0225bb4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b499:5dff:fe19:f1ef"}], "active": true, "speed": 10000}, "ansible_vethf94d59bc": {"macaddress": "82:94:62:d1:b1:89", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf94d59bc", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8094:62ff:fed1:b189"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15304, "free": 564}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 3751, "free": 12117}}, "ansible_user_dir": "/root", "ansible_veth35ac4045": {"macaddress": "56:9a:d3:1e:4e:41", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", 
"tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth35ac4045", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::549a:d3ff:fe1e:4e41"}], "active": true, "speed": 10000}, "ansible_vethd210d5aa": {"macaddress": "26:a6:1b:0c:51:f5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd210d5aa", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::24a6:1bff:fe0c:51f5"}], "active": true, "speed": 10000}, "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.80.233"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on 
[fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdd": ["dm-4"], "sdc1": ["dm-5"], "sdb1": ["dm-3"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdc1": ["lvm-pv-uuid-YHozIT-aeXf-m5If-gaQS-FvYl-Nym0-0RzUSB"], "sdb1": ["lvm-pv-uuid-qCWS1x-73bH-B0no-VY37-gQqk-UpAE-eQGftL"], "sdd": ["lvm-pv-uuid-pm9Hyj-JJOq-3LHP-aDI1-AI27-QUvp-wnXbxu"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-jxCMK0Gin7m8wXZag3NDIerE9CqXrynXFxxfCmICEVCti3gq7jNrHW6cNZZPOYz5"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["3be19171-9fdb-4539-9387-6bdd0564873a"], "dm-4": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"], "dm-5": ["13c284db-9fee-4935-8f99-c6801d29b906"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_veth2fb12c2d": {"macaddress": "86:fa:4b:1c:ca:b4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", 
"tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2fb12c2d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::84fa:4bff:fe1c:cab4"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 564, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node03", "ansible_vetheeac4e78": {"macaddress": "3e:9e:71:55:a9:68", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetheeac4e78", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3c9e:71ff:fe55:a968"}], "active": true, "speed": 10000}, "ansible_vethb7951e67": 
{"macaddress": "a6:94:f6:87:e6:2c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb7951e67", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a494:f6ff:fe87:e62c"}], "active": true, "speed": 10000}, "ansible_interfaces": ["vethc0225bb4", "veth35ac4045", "vethf94d59bc", "vethd210d5aa", "ovs-system", "tun0", "veth602c6833", "veth293d3c1a", "veth54498901", "veth2fb12c2d", "vethb7951e67", "lo", "vxlan_sys_4789", "veth8557eee9", "vethcd296683", "vethb267b283", "veth567b9fba", "veth1648b835", "vethc71b1da1", "docker0", "vethd7ecedb9", "veth50ccc370", "br0", "veth37377637", "veth1f1533f4", "vetha7b6f0a6", "veth12745395", "veth820bf312", "vethcacec0c5", "veth40827115", "vethf38a86ce", "veth69981a56", "vethd6c17d5f", "vethff58bec8", "ens192", "vethc6df6285", "vetheeac4e78"], "ansible_vethf38a86ce": {"macaddress": "72:25:c3:f0:3b:e3", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": 
"on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf38a86ce", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7025:c3ff:fef0:3be3"}], "active": true, "speed": 10000}, "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_fqdn": "sp-os-node03.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 912998, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 29478518784, "block_total": 7196904, "mount": "/", "block_available": 6283906, "size_available": 25738878976, "fstype": "ext4", "inode_total": 1839600, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 43777, "block_size": 4096, "inode_available": 1795823}, {"block_used": 50825, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "size_total": 520794112, "block_total": 127147, "mount": "/boot", "block_available": 76322, "size_available": 312614912, "fstype": "xfs", "inode_total": 512000, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 345, "block_size": 4096, "inode_available": 511655}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 33, "block_size": 4096, "inode_available": 255967}, {"block_used": 459020, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 20753092608, "block_total": 5066673, "mount": "/var", "block_available": 4607653, "size_available": 18872946688, "fstype": "ext4", "inode_total": 1289808, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 4758, "block_size": 4096, "inode_available": 1285050}, {"block_used": 5204103, "uuid": "13c284db-9fee-4935-8f99-c6801d29b906", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 2655354, "size_available": 10876329984, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 507086, "block_size": 4096, "inode_available": 15219506}, {"block_used": 3033517, "uuid": "c0e5f752-3494-4b1a-97d4-85ac937e51de", "size_total": 14917042176, "block_total": 3641856, "mount": "/var/log", "block_available": 608339, "size_available": 2491756544, "fstype": "xfs", "inode_total": 9734832, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/mapper/vg01-var_log", "inode_used": 628, "block_size": 4096, "inode_available": 9734204}, {"block_used": 5204103, "uuid": "13c284db-9fee-4935-8f99-c6801d29b906", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 2655354, "size_available": 10876329984, "fstype": 
"xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 507086, "block_size": 4096, "inode_available": 15219506}, {"block_used": 5204103, "uuid": "13c284db-9fee-4935-8f99-c6801d29b906", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 2655354, "size_available": 10876329984, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 507086, "block_size": 4096, "inode_available": 15219506}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/e57cd568-1418-11e9-8e6a-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-prod-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.233,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/prod/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}], "ansible_veth567b9fba": {"macaddress": "9a:e7:8f:2a:70:21", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth567b9fba", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::98e7:8fff:fe2a:7021"}], "active": true, "speed": 10000}, "ansible_veth50ccc370": {"macaddress": "2e:02:93:a9:29:9d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": 
"on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth50ccc370", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2c02:93ff:fea9:299d"}], "active": true, "speed": 10000}, "ansible_nodename": "sp-os-node03.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_lvm": {"pvs": {"/dev/sdd": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sdb1": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "49.50", "vg": "vg01"}, "/dev/sdc1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}}, "lvs": {"swap": {"size_g": "3.91", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.90", "vg": "vg01"}, "var": {"size_g": "19.76", "vg": "vg01"}, "home": {"size_g": "3.91", "vg": "vg01"}, "root": {"size_g": "28.02", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "69.50", "num_lvs": "5", "num_pvs": "3"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_vethcd296683": {"macaddress": "02:4b:9a:6b:c2:02", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", 
"rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethcd296683", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4b:9aff:fe6b:c202"}], "active": true, "speed": 10000}, "ansible_veth54498901": {"macaddress": "82:97:11:37:c4:f1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth54498901", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8097:11ff:fe37:c4f1"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIOvpD0PtagOw1TMRjbCKjUFpQqZkxEdb9Nc/fuADY2wk", "ansible_processor_cores": 1, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153928", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044768", "iso8601_micro": "2019-01-09T14:39:28.726779Z", "weekday": "Wednesday", "time": "15:39:28", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:28Z", "day": "09", "iso8601_basic": "20190109T153928726675", "second": "28"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, 
"ansible_vethcacec0c5": {"macaddress": "3a:6f:4a:e0:90:ca", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethcacec0c5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::386f:4aff:fee0:90ca"}], "active": true, "speed": 10000}, "ansible_product_name": "VMware Virtual Platform", "ansible_veth293d3c1a": {"macaddress": "ca:ab:f3:c0:2b:ee", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", 
"receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth293d3c1a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c8ab:f3ff:fec0:2bee"}], "active": true, "speed": 10000}, "ansible_devices": {"sdd": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-pm9Hyj-JJOq-3LHP-aDI1-AI27-QUvp-wnXbxu"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "uuids": []}, "sectors": "103823360", "start": "1026048", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "49.51 GB"}, "sda1": {"sectorsize": 512, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["3be19171-9fdb-4539-9387-6bdd0564873a"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "holders": [], "size": "50.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-3"], "labels": [], "ids": ["lvm-pv-uuid-qCWS1x-73bH-B0no-VY37-gQqk-UpAE-eQGftL"], "uuids": []}, "sectors": "20971457", "start": "63", "holders": ["vg01-var"], "size": "10.00 GB"}}, "holders": [], "size": "10.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", 
"support_discard": "0", "model": "Virtual disk", "partitions": {"sdc1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-YHozIT-aeXf-m5If-gaQS-FvYl-Nym0-0RzUSB"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29155328", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.90 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-jxCMK0Gin7m8wXZag3NDIerE9CqXrynXFxxfCmICEVCti3gq7jNrHW6cNZZPOYz5"], "uuids": ["13c284db-9fee-4935-8f99-c6801d29b906"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "41443328", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "19.76 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "58761216", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "28.02 GB"}}, "ansible_user_uid": 0, "ansible_vetha7b6f0a6": {"macaddress": "4e:cd:af:3d:62:9f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", 
"tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha7b6f0a6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4ccd:afff:fe3d:629f"}], "active": true, "speed": 10000}, "ansible_veth8557eee9": {"macaddress": "f6:61:fe:13:41:49", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth8557eee9", "promisc": false, "timestamping": ["rx_software", 
"software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f461:feff:fe13:4149"}], "active": true, "speed": 10000}, "ansible_vethb267b283": {"macaddress": "6a:d7:c2:19:bf:5e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb267b283", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::68d7:c2ff:fe19:bf5e"}], "active": true, "speed": 10000}, "ansible_bios_date": "09/21/2015", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_python_version": "2.7.5", "ansible_br0": {"macaddress": "96:59:24:0e:85:4b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", 
"rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_veth25c3d22e": {"macaddress": "56:aa:c3:2c:73:f8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth25c3d22e", "promisc": 
true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::54aa:c3ff:fe2c:73f8"}], "active": true, "speed": 10000}, "ansible_distribution_version": "7.5", "ansible_veth9eeda053": {"macaddress": "c6:0a:25:2b:24:e6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9eeda053", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c40a:25ff:fe2b:24e6"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_CLIENT": "172.30.80.240 57140 22", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "PWD": "/root", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "HOME": "/root", "SELINUX_LEVEL_REQUESTED": "", "XDG_SESSION_ID": "35866", "_": "/usr/bin/python", "SSH_CONNECTION": "172.30.80.240 57140 172.30.80.244 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:49:a9", "network": "172.30.80.0", "mtu": 1500, "broadcast": "172.30.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.80.244", "interface": "ens192", "type": "ether", "gateway": "172.30.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": 
"ca:f5:70:ad:ed:93", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422A8A5B-A1AA-8149-4125-3576F319A9BB", "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_veth48c12032": {"macaddress": "0a:b3:a9:11:aa:72", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", 
"receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth48c12032", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8b3:a9ff:fe11:aa72"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7a4c94131ac8", "ansible_vethdcbf1f12": {"macaddress": "26:11:51:62:b6:84", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethdcbf1f12", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2411:51ff:fe62:b684"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::4098:43ff:fe54:5611", "fe80::ccf1:7ff:fea8:3c50", "fe80::14f2:8bff:fe3a:c04a", "fe80::7090:8bff:fec0:5de0", "fe80::704e:d1ff:feb2:aa8b", "fe80::54aa:c3ff:fe2c:73f8", "fe80::30ef:f1ff:fef1:b91f", "fe80::c494:47ff:feb6:23da", "fe80::88e9:8fff:fe4c:3e46", "fe80::8cfd:72ff:fe4e:a96d", "fe80::3401:52ff:fe48:b2eb", "fe80::7424:1eff:fee2:38ac", "fe80::c40a:25ff:fe2b:24e6", "fe80::68e5:64ff:feda:563c", "fe80::d458:a8ff:fe6b:d20b", "fe80::fcf8:ceff:fef0:f9c8", "fe80::b873:5aff:fe88:8946", "fe80::8897:fcff:fe4a:97a8", "fe80::8b3:a9ff:fe11:aa72", "fe80::c036:5bff:fe80:aa8b", "fe80::f094:f9ff:feef:3fca", "fe80::1c6e:91ff:febc:85e3", "fe80::648c:cbff:fed2:9b54", "fe80::5877:fcff:fec4:f754", "fe80::90c0:f6ff:febf:85ff", "fe80::80c8:6cff:fe34:ef5f", "fe80::2411:51ff:fe62:b684", "fe80::250:56ff:feaa:49a9", "fe80::c4e:8fff:fe85:b673", "fe80::852:60ff:fe60:f88b", "fe80::f045:b1ff:fec7:c702", "fe80::1813:12ff:febd:9655"], "ansible_vethd2868553": {"macaddress": "42:98:43:54:56:11", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on 
[fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd2868553", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4098:43ff:fe54:5611"}], "active": true, "speed": 10000}, "ansible_uptime_seconds": 10164932, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "gather_subset": ["all"], "ansible_vethd791ae6a": {"macaddress": "82:c8:6c:34:ef:5f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd791ae6a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::80c8:6cff:fe34:ef5f"}], "active": true, "speed": 10000}, "ansible_veth7f495d71": {"macaddress": "1e:6e:91:bc:85:e3", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7f495d71", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c6e:91ff:febc:85e3"}], "active": true, "speed": 10000}, "ansible_user_shell": "/bin/bash", "ansible_vethca4c0e80": {"macaddress": "36:01:52:48:b2:eb", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", 
"tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethca4c0e80", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3401:52ff:fe48:b2eb"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 2a 8a 5b a1 aa 81 49-41 25 35 76 f3 19 a9 bb", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_veth89440829": {"macaddress": "66:8c:cb:d2:9b:54", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth89440829", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::648c:cbff:fed2:9b54"}], "active": true, "speed": 10000}, "ansible_veth69ed120d": {"macaddress": "32:ef:f1:f1:b9:1f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", 
"rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth69ed120d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::30ef:f1ff:fef1:b91f"}], "active": true, "speed": 10000}, "ansible_local": {"openshift": {"node": {"labels": {"region": "primary", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.80.244", "bootstrapped": false}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"portal_net": "172.18.128.0/17", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_veth68346b7c": {"macaddress": "16:f2:8b:3a:c0:4a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": 
"off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth68346b7c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::14f2:8bff:fe3a:c04a"}], "active": true, "speed": 10000}, "ansible_vxlan_sys_4789": {"macaddress": "d6:58:a8:6b:d2:0b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d458:a8ff:fe6b:d20b"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:78:d1:0b:08", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", 
"tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.024278d10b08", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_veth5bae4014": {"macaddress": "8e:fd:72:4e:a9:6d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5bae4014", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8cfd:72ff:fe4e:a96d"}], "active": true, "speed": 10000}, "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJkYDqr8bEafpzsDQhcWtEcYovsCQ1gbGbeUPrWKGDcYOgmUMyVjnus6Rq/Sod4efHxGixErpNx1fmj0JTgfBH0=", "ansible_veth83a776b1": {"macaddress": "6a:e5:64:da:56:3c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": 
"on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth83a776b1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::68e5:64ff:feda:563c"}], "active": true, "speed": 10000}, "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_veth31b045b3": {"macaddress": "72:4e:d1:b2:aa:8b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": 
"veth31b045b3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::704e:d1ff:feb2:aa8b"}], "active": true, "speed": 10000}, "ansible_swaptotal_mb": 0, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_tun0": {"macaddress": "f2:94:f9:ef:3f:ca", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.1.255", "netmask": "255.255.254.0", "network": "172.18.0.0", "address": "172.18.0.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f094:f9ff:feef:3fca"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQChqeyT4IuxdY1Zw9QWE9dDMx6bK274ykATvlzC+n4+YgHwdQK6/VVoZlhMNhkOTJjrKWX1dn0wC4+qeL4gIYiW0dM/YZA0LzqnOJeigXk+FvlAxgojIa4DnVdhgjxds/GTLSfQ732Ow6fgKoHteFw1mzTU6nwWOJx9AKeRyJBUK9yVz5xyvLagl1vJa8zr2l5O13KJ9JyXkOCi9sz7zcLXpcYB+hGBOqON4/SLkRp8dHjUpC+5Cm9y/UXToTUHzu63enlVsttRtHnInFwGk1rBB0qvXArbdWyT7055gwuui4LyTp/e9whC1lqdJlCWdA7njieuiIIcyTzG5QA2SyVd", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:49:a9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", 
"tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.80.255", "netmask": "255.255.255.0", "network": "172.30.80.0", "address": "172.30.80.244"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:49a9"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_veth45914bed": {"macaddress": "8a:97:fc:4a:97:a8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth45914bed", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::8897:fcff:fe4a:97a8"}], "active": true, "speed": 10000}, "ansible_system": "Linux", "ansible_veth2655a259": {"macaddress": "5a:77:fc:c4:f7:54", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2655a259", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5877:fcff:fec4:f754"}], "active": true, "speed": 10000}, "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.0.1", "172.30.80.244"], "ansible_python_version": "2.7.5", "ansible_vethf03559c5": {"macaddress": "1a:13:12:bd:96:55", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off 
[fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf03559c5", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1813:12ff:febd:9655"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 11941, "free": 3927}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 3666, "free": 12202}}, "ansible_user_dir": "/root", "ansible_vethf3d8a2ed": {"macaddress": "c6:94:47:b6:23:da", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf3d8a2ed", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c494:47ff:feb6:23da"}], "active": true, "speed": 10000}, "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.80.244"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_veth1925c5cb": {"macaddress": "fe:f8:ce:f0:f9:c8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", 
"tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1925c5cb", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcf8:ceff:fef0:f9c8"}], "active": true, "speed": 10000}, "ansible_vethf1137571": {"macaddress": "8a:e9:8f:4c:3e:46", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf1137571", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::88e9:8fff:fe4c:3e46"}], "active": true, "speed": 10000}, "ansible_veth0fe6c2eb": {"macaddress": "0a:52:60:60:f8:8b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", 
"tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0fe6c2eb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::852:60ff:fe60:f88b"}], "active": true, "speed": 10000}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": 
"loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdd": ["dm-5"], "sdc1": ["dm-2"], "sda2": ["dm-0", "dm-1", "dm-3", "dm-4", "dm-5"], "sdb": ["dm-4"]}, "labels": {"dm-4": ["lv_var"], "dm-3": ["lv_home"], "dm-1": ["lv_root"]}, "ids": {"sdc1": ["lvm-pv-uuid-OUl4YW-DOvr-DABH-yd58-bdyF-IcrX-JxgDX1"], "sdd": ["lvm-pv-uuid-iBbjLw-MRFu-cK2H-QxXd-gACL-i3EM-UAEI3G"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "sdb": ["lvm-pv-uuid-3yf9Mh-mYpt-fjJM-rEes-nwyc-WOqP-jRmcnp"], "dm-4": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-5": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-2": ["dm-name-vg02-docker", "dm-uuid-LVM-6QGel69m04wdATCLV1GmvS2ZgdZzh5wROezNkZ0pNPdy83LZrUkWq32Q7vubjX3W"], "dm-3": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["3be19171-9fdb-4539-9387-6bdd0564873a"], "dm-4": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-5": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"], "dm-2": ["f487239d-11b0-44c9-8375-7e87fcaf360e"], "dm-3": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_memfree_mb": 3927, "ansible_veth73f8ee14": {"macaddress": "c2:36:5b:80:aa:8b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth73f8ee14", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::c036:5bff:fe80:aa8b"}], "active": true, "speed": 10000}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node02", "ansible_interfaces": ["vethe57de07a", "veth69ed120d", "ovs-system", "tun0", "veth25c3d22e", "veth9eeda053", "veth2655a259", "veth16db40ce", "vethc84be45e", "vethd2868553", "vethca4c0e80", "vxlan_sys_4789", "veth83a776b1", "veth68346b7c", "veth48c12032", "vethd791ae6a", "vethf3d8a2ed", "docker0", "veth36804846", "br0", "veth73f8ee14", "veth1925c5cb", "veth45914bed", "vethf03559c5", "ens192", "vetha5c2fea2", "veth7f495d71", "veth5bae4014", "veth31b045b3", "veth89440829", "vethdcbf1f12", "vethf1137571", "lo", "vethd81afadb", "veth0fe6c2eb", "veth75d003a2"], "ansible_veth36804846": {"macaddress": "ce:f1:07:a8:3c:50", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth36804846", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ccf1:7ff:fea8:3c50"}], "active": true, "speed": 10000}, "ansible_fqdn": "sp-os-node02.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 910055, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 29478518784, "block_total": 7196904, "mount": "/", "block_available": 6286849, "size_available": 25750933504, "fstype": "ext4", "inode_total": 1839600, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 43757, "block_size": 4096, "inode_available": 1795843}, {"block_used": 617981, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 20753092608, "block_total": 5066673, "mount": "/var", "block_available": 4448692, "size_available": 18221842432, "fstype": "ext4", "inode_total": 1289808, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 4323, "block_size": 4096, "inode_available": 1285485}, {"block_used": 4667266, "uuid": "f487239d-11b0-44c9-8375-7e87fcaf360e", "size_total": 32192335872, 
"block_total": 7859457, "mount": "/var/lib/docker", "block_available": 3192191, "size_available": 13075214336, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg02-docker", "inode_used": 555208, "block_size": 4096, "inode_available": 15171384}, {"block_used": 1470152, "uuid": "c0e5f752-3494-4b1a-97d4-85ac937e51de", "size_total": 14917042176, "block_total": 3641856, "mount": "/var/log", "block_available": 2171704, "size_available": 8895299584, "fstype": "xfs", "inode_total": 14577664, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/mapper/vg01-var_log", "inode_used": 285, "block_size": 4096, "inode_available": 14577379}, {"block_used": 59319, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932217, "size_available": 3818360832, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 34, "block_size": 4096, "inode_available": 255966}, {"block_used": 50916, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "size_total": 520794112, "block_total": 127147, "mount": "/boot", "block_available": 76231, "size_available": 312242176, "fstype": "xfs", "inode_total": 512000, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 345, "block_size": 4096, "inode_available": 511655}, {"block_used": 4667266, "uuid": "f487239d-11b0-44c9-8375-7e87fcaf360e", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 3192191, "size_available": 13075214336, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 555208, "block_size": 4096, "inode_available": 15171384}, {"block_used": 4667266, "uuid": "f487239d-11b0-44c9-8375-7e87fcaf360e", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 3192191, "size_available": 13075214336, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg02-docker", "inode_used": 555208, "block_size": 4096, "inode_available": 15171384}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/a7466548-136e-11e9-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-dev-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.244,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/dev/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}], "ansible_vethe57de07a": {"macaddress": "72:90:8b:c0:5d:e0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", 
"vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe57de07a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7090:8bff:fec0:5de0"}], "active": true, "speed": 10000}, "ansible_nodename": "sp-os-node02.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_vethc84be45e": {"macaddress": "76:24:1e:e2:38:ac", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc84be45e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7424:1eff:fee2:38ac"}], "active": true, "speed": 10000}, "ansible_lvm": {"pvs": {"/dev/sdd": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sdb": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": 
"0", "size_g": "49.50", "vg": "vg01"}, "/dev/sdc1": {"free_g": "0", "size_g": "30.00", "vg": "vg02"}}, "lvs": {"swap": {"size_g": "3.91", "vg": "vg01"}, "var_log": {"size_g": "13.90", "vg": "vg01"}, "var": {"size_g": "19.76", "vg": "vg01"}, "home": {"size_g": "3.91", "vg": "vg01"}, "docker": {"size_g": "30.00", "vg": "vg02"}, "root": {"size_g": "28.02", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "69.50", "num_lvs": "5", "num_pvs": "3"}, "vg02": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_vetha5c2fea2": {"macaddress": "ba:73:5a:88:89:46", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha5c2fea2", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b873:5aff:fe88:8946"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAINVgH9a9cMuqv3rLJoLoZqWvPB4uR1O72A6C1q2M+PaQ", "ansible_processor_cores": 1, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153928", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044768", "iso8601_micro": "2019-01-09T14:39:28.770213Z", "weekday": "Wednesday", "time": "15:39:28", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:28Z", "day": "09", "iso8601_basic": "20190109T153928770111", "second": "28"}, "ansible_veth16db40ce": {"macaddress": "92:c0:f6:bf:85:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", 
"rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth16db40ce", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::90c0:f6ff:febf:85ff"}], "active": true, "speed": 10000}, "ansible_veth75d003a2": {"macaddress": "0e:4e:8f:85:b6:73", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth75d003a2", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c4e:8fff:fe85:b673"}], "active": true, "speed": 10000}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", 
"ansible_effective_user_id": 0, "ansible_product_name": "VMware Virtual Platform", "ansible_devices": {"sdd": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-iBbjLw-MRFu-cK2H-QxXd-gACL-i3EM-UAEI3G"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-3", "dm-4", "dm-5"], "labels": [], "ids": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "uuids": []}, "sectors": "103823360", "start": "1026048", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "49.51 GB"}, "sda1": {"sectorsize": 512, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["3be19171-9fdb-4539-9387-6bdd0564873a"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "holders": [], "size": "50.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-3yf9Mh-mYpt-fjJM-rEes-nwyc-WOqP-jRmcnp"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var"], "size": "10.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdc1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-2"], "labels": [], "ids": ["lvm-pv-uuid-OUl4YW-DOvr-DABH-yd58-bdyF-IcrX-JxgDX1"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg02-docker"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, 
"sectors": "41443328", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "19.76 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29155328", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.90 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg02-docker", "dm-uuid-LVM-6QGel69m04wdATCLV1GmvS2ZgdZzh5wROezNkZ0pNPdy83LZrUkWq32Q7vubjX3W"], "uuids": ["f487239d-11b0-44c9-8375-7e87fcaf360e"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "58761216", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "28.02 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "09/21/2015", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", 
"cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_vethd81afadb": {"macaddress": "f2:45:b1:c7:c7:02", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd81afadb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f045:b1ff:fec7:c702"}], "active": true, "speed": 10000}, "ansible_br0": {"macaddress": "e6:90:b1:1e:6f:4c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off 
[fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_veth54cfa45a": {"macaddress": "7a:8c:24:b2:a7:25", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth54cfa45a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::788c:24ff:feb2:a725"}], "active": true, "speed": 10000}, "ansible_vethfcc84e69": {"macaddress": "5e:77:a9:a2:ad:e6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", 
"fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethfcc84e69", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5c77:a9ff:fea2:ade6"}], "active": true, "speed": 10000}, "ansible_veth4088132d": {"macaddress": "da:24:9d:18:f2:8a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4088132d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d824:9dff:fe18:f28a"}], "active": true, "speed": 10000}, "ansible_veth88a83adb": {"macaddress": "72:7a:e0:f3:65:e6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", 
"rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth88a83adb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::707a:e0ff:fef3:65e6"}], "active": true, "speed": 10000}, "module_setup": true, "ansible_distribution_version": "7.5", "ansible_vetha1df8081": {"macaddress": "62:bc:e6:f5:fc:b9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha1df8081", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::60bc:e6ff:fef5:fcb9"}], "active": true, "speed": 10000}, 
"ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 48312 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "52931", "_": "/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 48312 172.30.81.88 22"}, "ansible_veth1f3e686e": {"macaddress": "4e:30:cb:aa:e6:73", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1f3e686e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4c30:cbff:feaa:e673"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:0f:a6", "network": "172.30.81.0", "mtu": 1500, "broadcast": "172.30.81.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.81.88", "interface": "ens192", "type": "ether", "gateway": "172.30.81.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_veth3d9f792d": {"macaddress": "be:6f:33:f5:3e:09", "features": 
{"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth3d9f792d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bc6f:33ff:fef5:3e09"}], "active": true, "speed": 10000}, "ansible_ovs_system": {"macaddress": "02:8c:53:b9:96:db", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, 
"device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422AF748-24CB-5955-94B8-40EC6727214E", "ansible_pkg_mgr": "yum", "ansible_veth21f4dfd4": {"macaddress": "ae:09:31:51:40:85", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth21f4dfd4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ac09:31ff:fe51:4085"}], "active": true, "speed": 10000}, "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:5d784a8ac668", "ansible_veth49514ce8": {"macaddress": "be:93:5a:41:1d:61", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", 
"udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth49514ce8", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bc93:5aff:fe41:1d61"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::709d:4eff:febc:af6e", "fe80::bc93:5aff:fe41:1d61", "fe80::64ed:4bff:fe8d:c02a", "fe80::64be:96ff:fe6d:9d37", "fe80::707a:e0ff:fef3:65e6", "fe80::6432:57ff:fea0:1ebc", "fe80::fc37:96ff:feaf:3f0e", "fe80::5c77:a9ff:fea2:ade6", "fe80::c4c:56ff:fe73:d67", "fe80::3425:84ff:fec1:ce87", "fe80::d480:4aff:feeb:304b", "fe80::ac09:31ff:fe51:4085", "fe80::fc84:c3ff:fec9:b1f2", "fe80::3835:8dff:fe44:99d5", "fe80::382c:abff:fe07:2a6a", "fe80::82e:2bff:fe1e:99c9", "fe80::58cd:4cff:fe5a:2e22", "fe80::443d:54ff:fe8e:3d63", "fe80::1cd2:15ff:fe59:aee3", "fe80::b881:8eff:fea8:9ded", "fe80::58a0:47ff:fef8:752e", "fe80::b09b:a9ff:fe53:c9a8", "fe80::3c40:5eff:fe5a:591c", "fe80::68ee:48ff:fe39:65f7", "fe80::788c:24ff:feb2:a725", "fe80::38d6:56ff:fe72:834f", "fe80::302b:25ff:fe0a:5514", "fe80::ca2:f3ff:fed5:4891", "fe80::c8c:d0ff:fe86:1059", "fe80::6ca8:26ff:fe40:80a1", "fe80::d824:9dff:fe18:f28a", "fe80::c807:3eff:fee5:ca5f", "fe80::74bf:e7ff:fef3:156c", "fe80::bc6f:33ff:fef5:3e09", "fe80::250:56ff:feaa:fa6", "fe80::4c30:cbff:feaa:e673", "fe80::1c1b:32ff:feb3:8ff5", "fe80::e4ef:1fff:fed8:91f2", "fe80::58b5:e4ff:feb8:bccc", "fe80::3075:30ff:fe15:e112", "fe80::1c55:16ff:fede:6d7c", "fe80::60bc:e6ff:fef5:fcb9"], "ansible_uptime_seconds": 10165032, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_veth7139132f": {"macaddress": "e6:ef:1f:d8:91:f2", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off 
[fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7139132f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e4ef:1fff:fed8:91f2"}], "active": true, "speed": 10000}, "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_vethe0c5395e": {"macaddress": "66:32:57:a0:1e:bc", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe0c5395e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6432:57ff:fea0:1ebc"}], "active": true, "speed": 10000}, "ansible_vethec28b53c": {"macaddress": "32:75:30:15:e1:12", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", 
"generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethec28b53c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3075:30ff:fe15:e112"}], "active": true, "speed": 10000}, "ansible_vethb51b5ffb": {"macaddress": "46:3d:54:8e:3d:63", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb51b5ffb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::443d:54ff:fe8e:3d63"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 2a f7 48 24 cb 59 55-94 b8 40 ec 67 27 21 4e", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"schedulable": "false", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.81.88", "bootstrapped": true}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"system_images_registry": "registry.access.redhat.com", "etcd_runtime": "host", 
"is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "portal_net": "172.18.128.0/17", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "3a:2c:ab:07:2a:6a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::382c:abff:fe07:2a6a"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:81:ac:92:d0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", 
"ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.024281ac92d0", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDsaKvetOtFkVkungAc1H/1kTn//vyobnDamnjVyAuxVWSSNXfMoNzuZ0MqZmCsvhJ0SG1wvUNvWBYjnknqz0jA=", "ansible_vethc86165b3": {"macaddress": "36:25:84:c1:ce:87", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc86165b3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3425:84ff:fec1:ce87"}], "active": true, "speed": 10000}, "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_veth59c3733b": {"macaddress": 
"ba:81:8e:a8:9d:ed", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth59c3733b", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b881:8eff:fea8:9ded"}], "active": true, "speed": 10000}, "ansible_swaptotal_mb": 0, "ansible_veth61b00009": {"macaddress": "5a:b5:e4:b8:bc:cc", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": 
"ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth61b00009", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::58b5:e4ff:feb8:bccc"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_veth821f5921": {"macaddress": "0e:a2:f3:d5:48:91", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth821f5921", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ca2:f3ff:fed5:4891"}], "active": true, "speed": 10000}, "ansible_real_group_id": 0, "ansible_veth1d0db3a0": {"macaddress": "b2:9b:a9:53:c9:a8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", 
"tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1d0db3a0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b09b:a9ff:fe53:c9a8"}], "active": true, "speed": 10000}, "ansible_vethc3a15a13": {"macaddress": "66:be:96:6d:9d:37", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc3a15a13", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::64be:96ff:fe6d:9d37"}], "active": true, "speed": 10000}, "ansible_vetha86a42b5": {"macaddress": "3a:35:8d:44:99:d5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": 
"on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha86a42b5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3835:8dff:fe44:99d5"}], "active": true, "speed": 10000}, "ansible_tun0": {"macaddress": "3a:d6:56:72:83:4f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.15.255", "netmask": "255.255.254.0", "network": "172.18.14.0", "address": "172.18.14.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::38d6:56ff:fe72:834f"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDK/CHFWgPQi3JXASp8h3GvUbrz0DBTX+t3TOqRSmrsxj+NisJ/lar6gGsThcmYKMP5aylTj+UCWdj1+W6Lztih30B4I/U4tmT6zPpAsbzqyAFA3qkF1ZD58pL+GzlVan4LVzFktexoRqVo+NbNpn1+ecjCEoYwRPPOgLbUmMc2Fed6OtEX6Vir8v2d0CM62YfZei7bUN6a99xfWroOWF8Qa5d6feEXWUePKlMIopG5akly/Fg/6Pvpgy2bPfVZYYtJ87bwtbz04Vf43Zte9ak+Z9vx2eiBk0tbKh3DJZtf/hMlDsBF2UEedDQY6T341E5mq2IZffUEsERdMPiqTSIt", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:0f:a6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": 
"off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.81.255", "netmask": "255.255.255.0", "network": "172.30.81.0", "address": "172.30.81.88"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:fa6"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_vethba493de9": {"macaddress": "1e:1b:32:b3:8f:f5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off 
[fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethba493de9", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c1b:32ff:feb3:8ff5"}], "active": true, "speed": 10000}, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.14.1", "172.30.81.88"], "ansible_python_version": "2.7.5", "ansible_veth8bb71793": {"macaddress": "72:9d:4e:bc:af:6e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth8bb71793", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::709d:4eff:febc:af6e"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15604, "free": 264}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 8414, "free": 7454}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_vethd1f97d06": {"macaddress": "6e:a8:26:40:80:a1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", 
"rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd1f97d06", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6ca8:26ff:fe40:80a1"}], "active": true, "speed": 10000}, "ansible_dns": {"nameservers": ["172.30.81.88"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdb1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": 
["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-LpeoGP-QtL3-fI2m-0HYT-CA61-G4b1-H1QDOb"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-hXY0sc-EOgd-bCel-PkRW-pFfk-7YtJ-ittbTA"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-RDZWYzSM5xsHCbccZyRXFjRVQQqwhBn53R3kPu0OYnkcwd8mwqCs7aEm6qf9hVkn"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-5": ["32497aec-4047-47a6-98c7-23ff26ff981e"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_veth6c3d2af5": {"macaddress": "32:2b:25:0a:55:14", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6c3d2af5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::302b:25ff:fe0a:5514"}], "active": true, "speed": 10000}, "ansible_vethebf107b1": {"macaddress": "1e:d2:15:59:ae:e3", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", 
"tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethebf107b1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1cd2:15ff:fe59:aee3"}], "active": true, "speed": 10000}, "ansible_veth9612d3f0": {"macaddress": "3e:40:5e:5a:59:1c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9612d3f0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3c40:5eff:fe5a:591c"}], "active": true, "speed": 10000}, 
"ansible_veth403f769c": {"macaddress": "ca:07:3e:e5:ca:5f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth403f769c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c807:3eff:fee5:ca5f"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 264, "ansible_veth8bfc269f": {"macaddress": "66:ed:4b:8d:c0:2a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth8bfc269f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::64ed:4bff:fe8d:c02a"}], "active": true, "speed": 10000}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node05", "ansible_interfaces": ["vethec28b53c", "vethba493de9", "vethd38a8b6a", "vetha86a42b5", "ovs-system", "tun0", "veth8bb71793", "vethc3a15a13", "vetha1df8081", "veth96c0d962", "vethd510d7ef", "vethebf107b1", "veth54cfa45a", "veth21a1b22a", "veth3d9f792d", "veth936fd0f7", "vethe0c5395e", "vethf106f1c5", "vethc86165b3", "vxlan_sys_4789", "veth61b00009", "veth8bfc269f", "veth49514ce8", "veth88a83adb", "vethd1f97d06", "docker0", "veth821f5921", "veth1f3e686e", "vethb51b5ffb", "veth9612d3f0", "veth22b20693", "veth21f4dfd4", "veth6c3d2af5", "veth59c3733b", "br0", "veth80b7e385", "veth403f769c", "veth4088132d", "veth1d0db3a0", "veth7139132f", "vethfcc84e69", "veth0c918137", "lo", "veth9629eee4", "ens192", "vethb341ff22"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_veth936fd0f7": {"macaddress": "5a:a0:47:f8:75:2e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth936fd0f7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::58a0:47ff:fef8:752e"}], "active": true, "speed": 10000}, "ansible_fqdn": "sp-os-node05.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 726432, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1821173, "size_available": 7459524608, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57319, "block_size": 4096, "inode_available": 596761}, {"block_used": 73779, "uuid": 
"f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 425889, "size_available": 1744441344, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 353, "block_size": 4096, "inode_available": 130719}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 32, "block_size": 4096, "inode_available": 255968}, {"block_used": 539099, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 976549, "size_available": 3999944704, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 7344, "block_size": 4096, "inode_available": 382032}, {"block_used": 2356525, "uuid": "fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 14726512640, "block_total": 3595340, "mount": "/var/log", "block_available": 1238815, "size_available": 5074186240, "fstype": "ext4", "inode_total": 896000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var_log", "inode_used": 358, "block_size": 4096, "inode_available": 895642}, {"block_used": 4535868, "uuid": "32497aec-4047-47a6-98c7-23ff26ff981e", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 3323589, "size_available": 13613420544, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 951522, "block_size": 4096, "inode_available": 14775070}, {"block_used": 4535868, "uuid": "32497aec-4047-47a6-98c7-23ff26ff981e", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 3323589, "size_available": 13613420544, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 951522, "block_size": 4096, "inode_available": 14775070}, {"block_used": 4535868, "uuid": "32497aec-4047-47a6-98c7-23ff26ff981e", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 3323589, "size_available": 13613420544, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 951522, "block_size": 4096, "inode_available": 14775070}], "ansible_vethd510d7ef": {"macaddress": "fe:84:c3:c9:b1:f2", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", 
"rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd510d7ef", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fc84:c3ff:fec9:b1f2"}], "active": true, "speed": 10000}, "ansible_nodename": "sp-os-node05.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_lvm": {"pvs": {"/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.99", "vg": "vg01"}, "var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "37.99", "num_lvs": "5", "num_pvs": "2"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAICoqJKPOYNC+I10x91QpC4wCFjfXCH/9G5KXlzpRDxJS", "ansible_processor_cores": 1, "ansible_vethf106f1c5": {"macaddress": "0e:4c:56:73:0d:67", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", 
"udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf106f1c5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c4c:56ff:fe73:d67"}], "active": true, "speed": 10000}, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153928", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044768", "iso8601_micro": "2019-01-09T14:39:28.757413Z", "weekday": "Wednesday", "time": "15:39:28", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:28Z", "day": "09", "iso8601_basic": "20190109T153928757287", "second": "28"}, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth21a1b22a": {"macaddress": "d6:80:4a:eb:30:4b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth21a1b22a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d480:4aff:feeb:304b"}], "active": true, "speed": 10000}, "ansible_product_name": "VMware Virtual Platform", "ansible_vethb341ff22": {"macaddress": "fe:37:96:af:3f:0e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", 
"rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb341ff22", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fc37:96ff:feaf:3f0e"}], "active": true, "speed": 10000}, "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / 
Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-LpeoGP-QtL3-fI2m-0HYT-CA61-G4b1-H1QDOb"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-hXY0sc-EOgd-bCel-PkRW-pFfk-7YtJ-ittbTA"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29343744", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.99 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-RDZWYzSM5xsHCbccZyRXFjRVQQqwhBn53R3kPu0OYnkcwd8mwqCs7aEm6qf9hVkn"], "uuids": ["32497aec-4047-47a6-98c7-23ff26ff981e"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": 
"1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_user_uid": 0, "ansible_veth9629eee4": {"macaddress": "6a:ee:48:39:65:f7", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9629eee4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::68ee:48ff:fe39:65f7"}], "active": true, "speed": 10000}, "ansible_vethd38a8b6a": {"macaddress": "5a:cd:4c:5a:2e:22", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", 
"ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd38a8b6a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::58cd:4cff:fe5a:2e22"}], "active": true, "speed": 10000}, "ansible_bios_date": "09/21/2015", "ansible_veth0c918137": {"macaddress": "76:bf:e7:f3:15:6c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0c918137", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::74bf:e7ff:fef3:156c"}], "active": true, "speed": 10000}, "ansible_veth22b20693": {"macaddress": "0e:8c:d0:86:10:59", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", 
"tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth22b20693", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c8c:d0ff:fe86:1059"}], "active": true, "speed": 10000}, "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_veth80b7e385": {"macaddress": "1e:55:16:de:6d:7c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth80b7e385", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::1c55:16ff:fede:6d7c"}], "active": true, "speed": 10000}, "ansible_veth96c0d962": {"macaddress": "0a:2e:2b:1e:99:c9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth96c0d962", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::82e:2bff:fe1e:99c9"}], "active": true, "speed": 10000}, "ansible_br0": {"macaddress": "42:50:be:c4:35:41", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", 
"tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_distribution_version": "7.5", "ansible_veth99da462f": {"macaddress": "42:fd:4a:06:21:31", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth99da462f", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::40fd:4aff:fe06:2131"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 35258 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "34032", "_": 
"/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 35258 172.29.80.170 22"}, "ansible_vethb91cd912": {"macaddress": "3e:df:65:9d:53:57", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb91cd912", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3cdf:65ff:fe9d:5357"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:8a:e0:f8", "network": "172.29.80.0", "mtu": 1500, "broadcast": "172.29.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.29.80.170", "interface": "ens192", "type": "ether", "gateway": "172.29.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "32:43:0e:b2:a0:f4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off 
[fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "420A94E6-C957-CF85-F333-61A93D27FF6D", "ansible_pkg_mgr": "yum", "ansible_service_mgr": "systemd", "ansible_veth1f31a131": {"macaddress": "b6:a9:1c:3a:a4:be", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1f31a131", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b4a9:1cff:fe3a:a4be"}], "active": true, "speed": 10000}, "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:68cba7f8c732", "ansible_vethe834cc65": {"macaddress": "3a:b6:7f:62:20:ba", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": 
"off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe834cc65", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::38b6:7fff:fe62:20ba"}], "active": true, "speed": 10000}, "ansible_vethb3f82ac4": {"macaddress": "fa:3a:dc:c7:7c:4e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb3f82ac4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::f83a:dcff:fec7:7c4e"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::607e:e1ff:fee4:29ef", "fe80::701b:42ff:fef5:b380", "fe80::380d:4eff:feda:c11d", "fe80::ccb6:7bff:fe5b:7e00", "fe80::e8a2:61ff:feb6:4187", "fe80::3825:c5ff:feeb:4472", "fe80::1800:4eff:fe37:a380", "fe80::8c80:2ff:fe7d:46ec", "fe80::30ae:daff:fe83:5613", "fe80::ec79:e5ff:fe28:499d", "fe80::2c9a:9dff:fe06:d89e", "fe80::2c01:d0ff:fe5d:fe1f", "fe80::709f:70ff:fe47:c21", "fe80::fc50:19ff:fe93:ef79", "fe80::44b7:eeff:fe78:396b", "fe80::f83a:dcff:fec7:7c4e", "fe80::836:3bff:feaa:6a53", "fe80::bcad:71ff:fe30:12c0", "fe80::38b6:7fff:fe62:20ba", "fe80::851:5eff:fe8d:ef0a", "fe80::d453:c3ff:fea0:2152", "fe80::b4a9:1cff:fe3a:a4be", "fe80::8401:d0ff:fec6:7383", "fe80::b7:c5ff:fe7d:19f8", "fe80::3cdf:65ff:fe9d:5357", "fe80::8c64:9ff:fecd:92cc", "fe80::250:56ff:fe8a:e0f8", "fe80::7ca2:4bff:feb1:bb29", "fe80::889a:a5ff:fe12:b6b5", "fe80::d4f2:c0ff:fea1:b05c", "fe80::c4a6:fcff:feba:1b5c", "fe80::40fd:4aff:fe06:2131"], "ansible_uptime_seconds": 10165037, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_veth32c1acb6": {"macaddress": "3a:0d:4e:da:c1:1d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth32c1acb6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::380d:4eff:feda:c11d"}], "active": true, "speed": 10000}, "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_veth9b51134d": {"macaddress": "72:1b:42:f5:b3:80", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", 
"rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9b51134d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::701b:42ff:fef5:b380"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 0a 94 e6 c9 57 cf 85-f3 33 61 a9 3d 27 ff 6d", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"schedulable": "false", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "proxy_mode": "iptables", "dns_ip": "172.29.80.170", "bootstrapped": true}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"portal_net": "172.18.128.0/17", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "fe:50:19:93:ef:79", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", 
"tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fc50:19ff:fe93:ef79"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:a3:77:87:f9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.0242a37787f9", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU 
E5-2690 v4 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz"], "ansible_vethf5604af3": {"macaddress": "8a:9a:a5:12:b6:b5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf5604af3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::889a:a5ff:fe12:b6b5"}], "active": true, "speed": 10000}, "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbJkyM6cDggxKUz8+O1LqredFpVi6sqdaSXAqNFbuyveuYFIHEhixjX7YKMbqcDdebQmrxu0SR9JB/ZVF3Zi0o=", "ansible_vethd78d1376": {"macaddress": "86:01:d0:c6:73:83", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": 
"off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd78d1376", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8401:d0ff:fec6:7383"}], "active": true, "speed": 10000}, "ansible_veth51bfb543": {"macaddress": "72:9f:70:47:0c:21", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth51bfb543", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::709f:70ff:fe47:c21"}], "active": true, "speed": 10000}, "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_vethe2d27073": {"macaddress": "46:b7:ee:78:39:6b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", 
"tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe2d27073", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::44b7:eeff:fe78:396b"}], "active": true, "speed": 10000}, "ansible_veth59fe49c8": {"macaddress": "2e:01:d0:5d:fe:1f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth59fe49c8", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2c01:d0ff:fe5d:fe1f"}], "active": true, "speed": 10000}, "ansible_swaptotal_mb": 0, "ansible_vethfa74b3a7": {"macaddress": "3a:25:c5:eb:44:72", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": 
"off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethfa74b3a7", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3825:c5ff:feeb:4472"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_veth4e344db2": {"macaddress": "be:ad:71:30:12:c0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4e344db2", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bcad:71ff:fe30:12c0"}], "active": true, "speed": 10000}, "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_tun0": {"macaddress": "0a:36:3b:aa:6a:53", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.23.255", "netmask": "255.255.254.0", "network": "172.18.22.0", "address": "172.18.22.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::836:3bff:feaa:6a53"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC+tqFTktrO6SO6UU7cs8zwQrynXdJRrwzLnL/i3pHEL3I1kvTeIe7llBBvLckKPZ9ztjhNfWEoethT4fBA3Uwa7/4B3PCR9ZKQYK3x9ejkWWXEAiszk3ItCGxGEqQXK5Yk1nvCemob/4ImfD59DuMytbHb9jPNypn2c1niJcIvqyJOIMQj0cIGLDv+ZlnrNjiom+rarf80ICSVhMh1n8xfXO4bl/qQm0CG45LkLeJbV+0BORX8NYVMUXkGNNf/escvR9booguIoHVTqr5CPhNrlb3WomaJrXVaiwa6J94bNSL4MfxF7KhExjYO0dQ5/NI6gfj6af2AiE/ebrd80USL", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:8a:e0:f8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", 
"l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.29.80.255", "netmask": "255.255.255.0", "network": "172.29.80.0", "address": "172.29.80.170"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:fe8a:e0f8"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_system": "Linux", "ansible_veth5277aac3": {"macaddress": "d6:f2:c0:a1:b0:5c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5277aac3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d4f2:c0ff:fea1:b05c"}], "active": true, "speed": 10000}, "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.22.1", "172.29.80.170"], "ansible_python_version": "2.7.5", "ansible_product_version": "None", "ansible_veth758fa261": {"macaddress": "2e:9a:9d:06:d8:9e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": 
"on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth758fa261", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2c9a:9dff:fe06:d89e"}], "active": true, "speed": 10000}, "ansible_memory_mb": {"real": {"total": 15868, "used": 15712, "free": 156}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 5239, "free": 10629}}, "ansible_user_dir": "/root", "ansible_vethd5a6c004": {"macaddress": "62:7e:e1:e4:29:ef", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd5a6c004", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::607e:e1ff:fee4:29ef"}], "active": true, "speed": 10000}, "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.29.80.170"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdb1": ["dm-2"], "sda2": ["dm-0", "dm-1", "dm-3", "dm-4", "dm-5"], "sdc": ["dm-5"]}, "labels": {"dm-4": ["lv_var"], "dm-3": ["lv_home"], "dm-1": ["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-Wd3c8u-V1qR-Ma8u-L1TE-ROfE-V7Y2-gZr3tE"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-paAsvZ-tIID-cxtN-CvU8-5flC-Bzu7-8JN1cR"], "dm-4": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-5": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-2": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-kV55q73Ai3EX0V0o16DB4IpRoSW1r3R1xZ0ms3baED10u4RCds0q0qRn3Lxnph8Q"], "dm-3": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": 
["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-5": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-2": ["bb6074d5-e91c-4e15-8197-dfd1be14f829"], "dm-3": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_vethe6bbf1fa": {"macaddress": "02:b7:c5:7d:19:f8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe6bbf1fa", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b7:c5ff:fe7d:19f8"}], "active": true, "speed": 10000}, "ansible_vethefea2995": {"macaddress": "8e:64:09:cd:92:cc", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", 
"tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethefea2995", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8c64:9ff:fecd:92cc"}], "active": true, "speed": 10000}, "ansible_veth9b0159bd": {"macaddress": "32:ae:da:83:56:13", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9b0159bd", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::30ae:daff:fe83:5613"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 156, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node09", "ansible_interfaces": ["vethe834cc65", "ovs-system", "tun0", "vethb5d5eac8", "veth0cfe698e", "veth99da462f", "vethb91cd912", "veth59fe49c8", "vethe2d27073", "lo", "vxlan_sys_4789", "veth26f680ab", "veth0b97f7d8", "vethd5a6c004", "docker0", "veth9b51134d", "vethb3f82ac4", "vethe6bbf1fa", "br0", "veth5035daca", "vethefea2995", "veth32c1acb6", "veth79f1dfa7", "vethd78d1376", "veth1f31a131", "vethf5604af3", "vethf765126a", "veth95a9fa01", "veth923ea452", "veth4e344db2", "veth5277aac3", "veth758fa261", "ens192", "veth9b0159bd", "vethfa74b3a7", "veth51bfb543"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_veth923ea452": {"macaddress": "ea:a2:61:b6:41:87", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off 
[fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth923ea452", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e8a2:61ff:feb6:4187"}], "active": true, "speed": 10000}, "ansible_fqdn": "sp-os-node09.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 708982, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1838623, "size_available": 7530999808, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57358, "block_size": 4096, "inode_available": 596722}, {"block_used": 73744, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 425924, "size_available": 1744584704, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 353, "block_size": 4096, "inode_available": 130719}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 32, "block_size": 4096, "inode_available": 255968}, {"block_used": 518075, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 997573, "size_available": 4086059008, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 5645, "block_size": 4096, "inode_available": 383731}, {"block_used": 2916929, "uuid": "fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 14726512640, "block_total": 3595340, "mount": "/var/log", "block_available": 678411, "size_available": 2778771456, "fstype": "ext4", "inode_total": 896000, "options": "rw,seclabel,relatime,data=ordered", "device": 
"/dev/mapper/vg01-var_log", "inode_used": 1309, "block_size": 4096, "inode_available": 894691}, {"block_used": 6488708, "uuid": "bb6074d5-e91c-4e15-8197-dfd1be14f829", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 1370749, "size_available": 5614587904, "fstype": "xfs", "inode_total": 11652328, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 686019, "block_size": 4096, "inode_available": 10966309}, {"block_used": 6488708, "uuid": "bb6074d5-e91c-4e15-8197-dfd1be14f829", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 1370749, "size_available": 5614587904, "fstype": "xfs", "inode_total": 11652328, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 686019, "block_size": 4096, "inode_available": 10966309}, {"block_used": 6488708, "uuid": "bb6074d5-e91c-4e15-8197-dfd1be14f829", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 1370749, "size_available": 5614587904, "fstype": "xfs", "inode_total": 11652328, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 686019, "block_size": 4096, "inode_available": 10966309}], "ansible_veth79f1dfa7": {"macaddress": "0a:51:5e:8d:ef:0a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth79f1dfa7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::851:5eff:fe8d:ef0a"}], "active": true, "speed": 10000}, "ansible_nodename": "sp-os-node09.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_lvm": {"pvs": {"/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": 
"10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.99", "vg": "vg01"}, "var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "37.99", "num_lvs": "5", "num_pvs": "2"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_vethb5d5eac8": {"macaddress": "7e:a2:4b:b1:bb:29", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb5d5eac8", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7ca2:4bff:feb1:bb29"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_veth0b97f7d8": {"macaddress": "c6:a6:fc:ba:1b:5c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", 
"generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0b97f7d8", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c4a6:fcff:feba:1b5c"}], "active": true, "speed": 10000}, "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAINwhOt7bNqpXgzcGliniea29WeeWmkpEcH2a8D9u2chX", "ansible_processor_cores": 1, "ansible_veth26f680ab": {"macaddress": "1a:00:4e:37:a3:80", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth26f680ab", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1800:4eff:fe37:a380"}], "active": true, "speed": 10000}, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153929", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044769", "iso8601_micro": "2019-01-09T14:39:29.497826Z", "weekday": "Wednesday", "time": "15:39:29", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:29Z", "day": "09", "iso8601_basic": "20190109T153929497749", "second": "29"}, "ansible_distribution_release": "Maipo", "ansible_os_family": 
"RedHat", "ansible_effective_user_id": 0, "ansible_product_name": "VMware Virtual Platform", "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-3", "dm-4", "dm-5"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-2"], "labels": [], "ids": ["lvm-pv-uuid-Wd3c8u-V1qR-Ma8u-L1TE-ROfE-V7Y2-gZr3tE"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-paAsvZ-tIID-cxtN-CvU8-5flC-Bzu7-8JN1cR"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29343744", "links": 
{"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.99 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-kV55q73Ai3EX0V0o16DB4IpRoSW1r3R1xZ0ms3baED10u4RCds0q0qRn3Lxnph8Q"], "uuids": ["bb6074d5-e91c-4e15-8197-dfd1be14f829"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_user_uid": 0, "ansible_veth5035daca": {"macaddress": "ee:79:e5:28:49:9d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off 
[fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5035daca", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ec79:e5ff:fe28:499d"}], "active": true, "speed": 10000}, "ansible_bios_date": "04/05/2016", "ansible_veth0cfe698e": {"macaddress": "d6:53:c3:a0:21:52", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0cfe698e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d453:c3ff:fea0:2152"}], "active": true, "speed": 10000}, "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_vethf765126a": {"macaddress": "8e:80:02:7d:46:ec", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", 
"tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf765126a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8c80:2ff:fe7d:46ec"}], "active": true, "speed": 10000}, "ansible_veth95a9fa01": {"macaddress": "ce:b6:7b:5b:7e:00", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth95a9fa01", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", 
"prefix": "64", "address": "fe80::ccb6:7bff:fe5b:7e00"}], "active": true, "speed": 10000}, "ansible_br0": {"macaddress": "9a:cd:03:68:44:4a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_distribution_version": "7.5", "ansible_veth466f7b8d": {"macaddress": "ca:84:60:d0:5b:16", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", 
"large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth466f7b8d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c884:60ff:fed0:5b16"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 55590 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "27512", "_": "/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 55590 172.29.80.172 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:8a:bc:30", "network": "172.29.80.0", "mtu": 1500, "broadcast": "172.29.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.29.80.172", "interface": "ens192", "type": "ether", "gateway": "172.29.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_vethefba95f4": {"macaddress": "46:c7:7f:06:e0:0b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": 
"on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethefba95f4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::44c7:7fff:fe06:e00b"}], "active": true, "speed": 10000}, "ansible_ovs_system": {"macaddress": "02:89:b5:78:a6:36", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "420A99F6-3B4A-CA3B-3F8A-CBA56A5501E9", "ansible_pkg_mgr": "yum", "ansible_vethdfde1b45": {"macaddress": "76:83:bd:19:2b:9e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off 
[fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethdfde1b45", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7483:bdff:fe19:2b9e"}], "active": true, "speed": 10000}, "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:4eed5bc1b3a2", "ansible_vetha3bbeb62": {"macaddress": "aa:fd:63:5b:95:5a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha3bbeb62", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a8fd:63ff:fe5b:955a"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::9076:47ff:fece:79c6", "fe80::a4f0:ccff:fe43:8090", "fe80::ecbe:64ff:fefe:26ee", "fe80::2090:86ff:fef5:191d", "fe80::fc50:9cff:fe39:76f", "fe80::5cff:cdff:fe70:3a5", 
"fe80::a8fd:63ff:fe5b:955a", "fe80::c41e:68ff:fefa:a1ef", "fe80::2454:f5ff:fe63:dd2b", "fe80::7cd9:62ff:fe82:ca96", "fe80::bc71:cfff:fefd:5080", "fe80::ec95:c3ff:fe71:77d", "fe80::7483:bdff:fe19:2b9e", "fe80::cc64:fdff:fe3a:13f1", "fe80::8cbc:9ff:fe9d:f8ed", "fe80::645d:82ff:fee5:13a4", "fe80::ee:acff:fe2e:55aa", "fe80::6409:daff:feb8:e1eb", "fe80::38b1:d4ff:fe8b:f0b2", "fe80::bc50:eaff:fe5d:e60e", "fe80::9c1a:e1ff:fedf:ca85", "fe80::44c7:7fff:fe06:e00b", "fe80::3cd9:99ff:feac:ecff", "fe80::489c:78ff:fea0:63f0", "fe80::250:56ff:fe8a:bc30", "fe80::b05d:1eff:fea9:bed5", "fe80::c89f:faff:feb7:e56c", "fe80::c884:60ff:fed0:5b16", "fe80::5077:28ff:fe5b:b828"], "ansible_uptime_seconds": 6851291, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_veth238dce56": {"macaddress": "a6:f0:cc:43:80:90", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth238dce56", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a4f0:ccff:fe43:8090"}], "active": true, "speed": 10000}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_product_serial": "VMware-42 0a 99 f6 3b 4a ca 3b-3f 8a cb a5 6a 55 01 e9", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"schedulable": "false", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "proxy_mode": "iptables", "dns_ip": "172.29.80.172", "bootstrapped": true}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": 
false, "hosted_registry_insecure": false}, "master": {}, "common": {"portal_net": "172.18.128.0/17", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "ee:95:c3:71:07:7d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ec95:c3ff:fe71:77d"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:f9:33:3f:99", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", 
"generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.0242f9333f99", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBChynMst1dCfNzBI5dkYSU1dLZc07QoojTnb7h7V97oJcuTEuh75nGgSYraXO13V/tyqp5HDEZWs21J5xmUFcEs=", "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_distribution_major_version": "7", "ansible_vethcc4175e6": {"macaddress": "ce:64:fd:3a:13:f1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, 
"device": "vethcc4175e6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::cc64:fdff:fe3a:13f1"}], "active": true, "speed": 10000}, "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_veth64ab77d1": {"macaddress": "26:54:f5:63:dd:2b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth64ab77d1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2454:f5ff:fe63:dd2b"}], "active": true, "speed": 10000}, "ansible_vetha09a7338": {"macaddress": "92:76:47:ce:79:c6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", 
"tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha09a7338", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9076:47ff:fece:79c6"}], "active": true, "speed": 10000}, "ansible_tun0": {"macaddress": "66:09:da:b8:e1:eb", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.29.255", "netmask": "255.255.254.0", "network": "172.18.28.0", "address": "172.18.28.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6409:daff:feb8:e1eb"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQCg+GTIN3EA/R/5fIAh9m7loyyt1OaRm1PlvjtP5oDHsmdgv2UEtWcw1aNswytiihAzoAGIt58LWFI/rabVvDM1whwmoA0b0iehYIYFfkMSUV70RSUVkrF3eTZN63x6+GkDQR4p2oFK0lUptdn9K7Z9QaBx9G5sAal1A+zjKByVIFbcKS0dkTzMuUnitUpDwevAl34V1uDnXdeQhFLW4WidYdQJMxssQK5uyOvjPaSLDIUgooEpsomwnET5VieNwf5/Y67BnLA5zYQIERdTat78hDayChYd4HgV/4ft0baZfFL1LLkBWcdmgs2+PQ9+5kzX6HBWH6f76s0SEp8NjHQz", "ansible_veth0ed65f1e": {"macaddress": "52:77:28:5b:b8:28", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", 
"tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0ed65f1e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5077:28ff:fe5b:b828"}], "active": true, "speed": 10000}, "ansible_ens192": {"macaddress": "00:50:56:8a:bc:30", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.29.80.255", "netmask": "255.255.255.0", "network": "172.29.80.0", "address": "172.29.80.172"}, "ipv6": 
[{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:fe8a:bc30"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_veth94a53767": {"macaddress": "fe:50:9c:39:07:6f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth94a53767", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fc50:9cff:fe39:76f"}], "active": true, "speed": 10000}, "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.28.1", "172.29.80.172"], "ansible_python_version": "2.7.5", "ansible_veth0e3c43df": {"macaddress": "22:90:86:f5:19:1d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", 
"tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0e3c43df", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2090:86ff:fef5:191d"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15428, "free": 440}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 4812, "free": 11056}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_veth8b7fe81f": {"macaddress": "8e:bc:09:9d:f8:ed", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth8b7fe81f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8cbc:9ff:fe9d:f8ed"}], "active": true, "speed": 10000}, "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.29.80.172"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", 
"fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdb1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-gF7oeS-uLEA-cRxf-WkTi-Hy50-R9rn-3oMR5T"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-LaXUTM-VMkJ-0hdY-5AGP-ZDJr-lcCe-CVM6xB"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-Yf4VQisbdkM02Ne3LWtwPh144BiDcRIaXeOXJ050zwIkIdiRGARtq1hKUdMwgmhM"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-5": ["30d12b62-3e5f-4dd3-aafe-d16e60ded060"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_vethbd6fd44f": {"macaddress": "be:71:cf:fd:50:80", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", 
"tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethbd6fd44f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bc71:cfff:fefd:5080"}], "active": true, "speed": 10000}, "ansible_vethec7dceca": {"macaddress": "3e:d9:99:ac:ec:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethec7dceca", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3cd9:99ff:feac:ecff"}], "active": true, "speed": 10000}, "ansible_veth150cb0a0": {"macaddress": "ca:9f:fa:b7:e5:6c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", 
"rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth150cb0a0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c89f:faff:feb7:e56c"}], "active": true, "speed": 10000}, "ansible_vethb60c9126": {"macaddress": "c6:1e:68:fa:a1:ef", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb60c9126", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::c41e:68ff:fefa:a1ef"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 440, "ansible_product_name": "VMware Virtual Platform", "ansible_veth85889fd0": {"macaddress": "5e:ff:cd:70:03:a5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth85889fd0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5cff:cdff:fe70:3a5"}], "active": true, "speed": 10000}, "ansible_veth0573ddc3": {"macaddress": "ee:be:64:fe:26:ee", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", 
"tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0573ddc3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ecbe:64ff:fefe:26ee"}], "active": true, "speed": 10000}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node11", "ansible_veth31a2213d": {"macaddress": "b2:5d:1e:a9:be:d5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth31a2213d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b05d:1eff:fea9:bed5"}], "active": true, "speed": 10000}, "ansible_interfaces": ["veth0573ddc3", "vetha09a7338", "veth85889fd0", "ovs-system", "tun0", "veth466f7b8d", "veth238dce56", "vethb60c9126", "veth5a94802d", "vethbd6fd44f", "veth24efec23", "veth974a20bc", "lo", "veth0ed65f1e", "vxlan_sys_4789", "veth31a2213d", "veth0e3c43df", "vetha3bbeb62", "docker0", "veth150cb0a0", "veth94a53767", "veth6fa17a43", "br0", "vethdfde1b45", "veth19b93b1a", "vethefba95f4", "vethcc4175e6", "veth6dd8ae16", "vethd1ff8703", "veth8b7fe81f", "vethec7dceca", "veth64ab77d1", "ens192"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_veth6fa17a43": {"macaddress": "3a:b1:d4:8b:f0:b2", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", 
"loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6fa17a43", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::38b1:d4ff:fe8b:f0b2"}], "active": true, "speed": 10000}, "ansible_fqdn": "sp-os-node11.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 710455, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1837150, "size_available": 7524966400, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57358, "block_size": 4096, "inode_available": 596722}, {"block_used": 73746, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 425922, "size_available": 1744576512, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 353, "block_size": 4096, "inode_available": 130719}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 32, "block_size": 4096, "inode_available": 255968}, {"block_used": 391305, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 1124343, "size_available": 4605308928, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 4946, "block_size": 4096, "inode_available": 384430}, {"block_used": 2430372, "uuid": "fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 14726512640, "block_total": 3595340, "mount": "/var/log", "block_available": 1164968, "size_available": 4771708928, "fstype": "ext4", "inode_total": 896000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var_log", "inode_used": 1247, "block_size": 4096, "inode_available": 894753}, {"block_used": 6072324, "uuid": "30d12b62-3e5f-4dd3-aafe-d16e60ded060", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 1787133, "size_available": 7320096768, "fstype": 
"xfs", "inode_total": 14960424, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 643068, "block_size": 4096, "inode_available": 14317356}, {"block_used": 6072324, "uuid": "30d12b62-3e5f-4dd3-aafe-d16e60ded060", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 1787133, "size_available": 7320096768, "fstype": "xfs", "inode_total": 14960424, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 643068, "block_size": 4096, "inode_available": 14317356}, {"block_used": 6072324, "uuid": "30d12b62-3e5f-4dd3-aafe-d16e60ded060", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 1787133, "size_available": 7320096768, "fstype": "xfs", "inode_total": 14960424, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 643068, "block_size": 4096, "inode_available": 14317356}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/7eceb400-b7a5-11e8-8f0c-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-qa-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.29.80.172,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/qa/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/73deb77e-feba-11e8-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-qa-media", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.29.80.172,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/qa/media", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/73deb77e-feba-11e8-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-qa-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.29.80.172,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/qa/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}], "ansible_nodename": "sp-os-node11.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_user_gecos": "root", "ansible_lvm": {"pvs": {"/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.99", "vg": "vg01"}, 
"var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "37.99", "num_lvs": "5", "num_pvs": "2"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_veth24efec23": {"macaddress": "9e:1a:e1:df:ca:85", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth24efec23", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9c1a:e1ff:fedf:ca85"}], "active": true, "speed": 10000}, "ansible_veth19b93b1a": {"macaddress": "4a:9c:78:a0:63:f0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", 
"tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth19b93b1a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::489c:78ff:fea0:63f0"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIK+KPE7n96DNlHrolIRI0/cmPHDY3hu/PwrP9Fou/QML", "ansible_processor_cores": 1, "ansible_veth6dd8ae16": {"macaddress": "7e:d9:62:82:ca:96", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6dd8ae16", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7cd9:62ff:fe82:ca96"}], "active": true, "speed": 10000}, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153929", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044769", "iso8601_micro": "2019-01-09T14:39:29.499129Z", "weekday": "Wednesday", "time": "15:39:29", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:29Z", "day": "09", "iso8601_basic": "20190109T153929499024", "second": "29"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth974a20bc": {"macaddress": "be:50:ea:5d:e6:0e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off 
[fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth974a20bc", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bc50:eaff:fe5d:e60e"}], "active": true, "speed": 10000}, "ansible_veth5a94802d": {"macaddress": "66:5d:82:e5:13:a4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5a94802d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::645d:82ff:fee5:13a4"}], "active": true, "speed": 10000}, "ansible_system": "Linux", "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-gF7oeS-uLEA-cRxf-WkTi-Hy50-R9rn-3oMR5T"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-LaXUTM-VMkJ-0hdY-5AGP-ZDJr-lcCe-CVM6xB"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29343744", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.99 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": 
{"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-Yf4VQisbdkM02Ne3LWtwPh144BiDcRIaXeOXJ050zwIkIdiRGARtq1hKUdMwgmhM"], "uuids": ["30d12b62-3e5f-4dd3-aafe-d16e60ded060"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_user_uid": 0, "ansible_vethd1ff8703": {"macaddress": "02:ee:ac:2e:55:aa", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": 
"off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd1ff8703", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ee:acff:fe2e:55aa"}], "active": true, "speed": 10000}, "ansible_bios_date": "04/05/2016", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_br0": {"macaddress": "72:3f:fe:bf:22:43", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU 
TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_veth589d0f10": {"macaddress": "0e:c3:a9:c0:fb:13", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth589d0f10", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::cc3:a9ff:fec0:fb13"}], "active": true, "speed": 10000}, "ansible_vethf91e07c4": {"macaddress": "ea:ae:6d:b9:6c:19", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": 
"on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf91e07c4", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e8ae:6dff:feb9:6c19"}], "active": true, "speed": 10000}, "module_setup": true, "ansible_distribution_version": "7.5", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 36176 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "34014", "_": "/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 36176 172.29.80.173 22"}, "ansible_vethc26439c6": {"macaddress": "da:fb:ce:da:7c:b9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc26439c6", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d8fb:ceff:feda:7cb9"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", 
"ansible_default_ipv4": {"macaddress": "00:50:56:8a:b6:1b", "network": "172.29.80.0", "mtu": 1500, "broadcast": "172.29.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.29.80.173", "interface": "ens192", "type": "ether", "gateway": "172.29.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_veth364a8aef": {"macaddress": "c6:f1:27:35:6f:bc", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth364a8aef", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c4f1:27ff:fe35:6fbc"}], "active": true, "speed": 10000}, "ansible_ovs_system": {"macaddress": "da:d7:80:1a:74:a2", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on 
[fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "420A8C4A-345D-F026-5AA4-FAA908BB81B5", "ansible_pkg_mgr": "yum", "ansible_veth2c8a43b4": {"macaddress": "b2:7a:ef:58:d4:b7", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2c8a43b4", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b07a:efff:fe58:d4b7"}], "active": true, "speed": 10000}, "ansible_distribution": "RedHat", "ansible_veth7fb4eb2d": {"macaddress": "3e:38:b8:47:9c:47", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": 
"off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7fb4eb2d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3c38:b8ff:fe47:9c47"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:e784d38868cd", "ansible_all_ipv6_addresses": ["fe80::c4f1:27ff:fe35:6fbc", "fe80::9045:8cff:fe98:a706", "fe80::b07a:efff:fe58:d4b7", "fe80::f88d:17ff:fe99:9f36", "fe80::8405:b3ff:feca:b374", "fe80::bc0d:f2ff:fe6a:cdab", "fe80::3c38:b8ff:fe47:9c47", "fe80::f8b2:3fff:fe46:7cf7", "fe80::e43e:eeff:fe97:d44a", "fe80::24ca:86ff:fea4:7b4b", "fe80::8019:e4ff:feee:5db0", "fe80::a0b8:b6ff:fe57:c38f", "fe80::cca0:a4ff:febe:92f3", "fe80::e8ae:6dff:feb9:6c19", "fe80::2c56:62ff:fee5:fdd7", "fe80::e434:f9ff:fe7b:e194", "fe80::f8d4:d9ff:fee7:e627", "fe80::38ae:caff:fe47:63e3", "fe80::cc3:a9ff:fec0:fb13", "fe80::600b:72ff:feae:8286", "fe80::2470:3eff:fee0:3991", "fe80::4ce:4dff:fe93:1644", "fe80::201a:c0ff:feff:b7eb", "fe80::d8fb:ceff:feda:7cb9", "fe80::1c68:50ff:feca:e63c", "fe80::5829:e7ff:feac:9687", "fe80::250:56ff:fe8a:b61b", "fe80::ac2c:2eff:fe79:ebf4", "fe80::3c60:c6ff:fe9c:fcad", "fe80::84fd:8aff:fea3:3e8d"], "ansible_uptime_seconds": 10164936, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "gather_subset": ["all"], "ansible_user_shell": "/bin/bash", "ansible_product_serial": "VMware-42 0a 8c 4a 34 5d f0 26-5a a4 fa a9 08 bb 81 b5", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"schedulable": "false", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "proxy_mode": "iptables", "dns_ip": "172.29.80.173", "bootstrapped": false}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"system_images_registry": "registry.access.redhat.com", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, 
"is_node_system_container": false, "portal_net": "172.18.128.0/17", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_veth2a007116": {"macaddress": "26:70:3e:e0:39:91", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2a007116", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2470:3eff:fee0:3991"}], "active": true, "speed": 10000}, "ansible_veth1039f224": {"macaddress": "86:05:b3:ca:b3:74", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": 
"off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1039f224", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8405:b3ff:feca:b374"}], "active": true, "speed": 10000}, "ansible_vxlan_sys_4789": {"macaddress": "ce:a0:a4:be:92:f3", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::cca0:a4ff:febe:92f3"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:9d:ee:21:bd", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", 
"tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.02429dee21bd", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz"], "ansible_vethec61e085": {"macaddress": "86:fd:8a:a3:3e:8d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethec61e085", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::84fd:8aff:fea3:3e8d"}], "active": true, "speed": 10000}, "ansible_ssh_host_key_ecdsa_public": 
"AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFpxtB/yycphUJ2xKVXQhOqFkwnT42KguBgPtch5Z44yE5tGqxkZntFWbOX/ObDLoIaXYclMTxHD9tJc1XEOgYk=", "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_vethf18422a4": {"macaddress": "92:45:8c:98:a7:06", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf18422a4", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9045:8cff:fe98:a706"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_veth427fce62": {"macaddress": "fa:8d:17:99:9f:36", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", 
"tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth427fce62", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f88d:17ff:fe99:9f36"}], "active": true, "speed": 10000}, "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_tun0": {"macaddress": "06:ce:4d:93:16:44", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.27.255", "netmask": "255.255.254.0", "network": "172.18.26.0", "address": "172.18.26.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4ce:4dff:fe93:1644"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDfVl1miVaKrpeV3Vyucq8XLZX/rtPbZe10LCmgwU6CbtMMvX3L1BtcrAdwIZkmfelQgtIoxjZh5F0pbfR9bURz1kUjj+7Nu46Y2pOUKXCiTWzXOMyHkPF8D/vhQAbXcCBw3mPh+PWMAbtNs/G4sNBJgV4x09zBt17oLFiNh/UNw4zude8vDrUHtfzwriG2XzJZ2QgAOTUIRiHDrP1/NH3GiveVLgMEqzxketb/Lb4xswEEJziXKKUjb8Ccc6WALme3ToLyjMOtDkEcaHIy6FZ5msod3JOll0EyUaUfQQf6P/7pzNKbRl4seuiUBWXCmzxzbSWSC+UlKfxBrjyavkm1", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:8a:b6:1b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", 
"tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.29.80.255", "netmask": "255.255.255.0", "network": "172.29.80.0", "address": "172.29.80.173"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:fe8a:b61b"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.26.1", "172.29.80.173"], "ansible_python_version": "2.7.5", "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15525, "free": 343}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 3741, "free": 12127}}, "ansible_user_dir": "/root", "ansible_vethc7baa18d": {"macaddress": "5a:29:e7:ac:96:87", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": 
"on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc7baa18d", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5829:e7ff:feac:9687"}], "active": true, "speed": 10000}, "ansible_veth263051d2": {"macaddress": "22:1a:c0:ff:b7:eb", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth263051d2", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::201a:c0ff:feff:b7eb"}], "active": true, "speed": 10000}, "ansible_veth5b38f112": {"macaddress": "3a:ae:ca:47:63:e3", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": 
"off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5b38f112", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::38ae:caff:fe47:63e3"}], "active": true, "speed": 10000}, "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.29.80.173"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_veth356f1002": {"macaddress": "be:0d:f2:6a:cd:ab", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth356f1002", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bc0d:f2ff:fe6a:cdab"}], "active": true, "speed": 10000}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", 
"tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdb1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-RzcNmB-vUHy-w6KX-WL2Z-h3aA-u0oI-vA24eP"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-S4m8t4-65KR-DgTp-JthZ-9pd5-WQlW-7oVgNN"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-2RelHB35A3F9IyZtZItDvqAgnFfZ0qOFfunHk9H54OoeJDl7cS0BSoYrEXfbUOWe"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-5": ["e942adfa-0d68-461f-9629-1d6aaf55e86b"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_veth1ab73ec9": {"macaddress": "ae:2c:2e:79:eb:f4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", 
"rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1ab73ec9", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ac2c:2eff:fe79:ebf4"}], "active": true, "speed": 10000}, "ansible_veth28b6b2a9": {"macaddress": "26:ca:86:a4:7b:4b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth28b6b2a9", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::24ca:86ff:fea4:7b4b"}], "active": true, "speed": 10000}, "ansible_veth4c9ce397": {"macaddress": "e6:3e:ee:97:d4:4a", "features": 
{"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4c9ce397", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e43e:eeff:fe97:d44a"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 343, "ansible_veth8ece5fc5": {"macaddress": "2e:56:62:e5:fd:d7", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], 
"mtu": 1450, "device": "veth8ece5fc5", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2c56:62ff:fee5:fdd7"}], "active": true, "speed": 10000}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node12", "ansible_interfaces": ["veth263051d2", "veth7173b11b", "veth2a007116", "ovs-system", "tun0", "veth1039f224", "vethf91e07c4", "veth28b6b2a9", "veth71d28e37", "vethc7baa18d", "veth7fb4eb2d", "veth589d0f10", "vethc26439c6", "vethb1141b9e", "veth6d9304ce", "vxlan_sys_4789", "veth5b38f112", "vethf18422a4", "veth4c9ce397", "vethbcc3383e", "veth356f1002", "docker0", "br0", "veth427fce62", "vethec61e085", "veth8ece5fc5", "vethdc3caf54", "veth7f753d50", "veth364a8aef", "veth37406833", "veth2c8a43b4", "veth1ab73ec9", "lo", "ens192"], "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_fqdn": "sp-os-node12.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 709063, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1838542, "size_available": 7530668032, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57356, "block_size": 4096, "inode_available": 596724}, {"block_used": 73745, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 425923, "size_available": 1744580608, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 353, "block_size": 4096, "inode_available": 130719}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 32, "block_size": 4096, "inode_available": 255968}, {"block_used": 323321, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 1192327, "size_available": 4883771392, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 5887, "block_size": 4096, "inode_available": 383489}, {"block_used": 2497546, "uuid": "fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 14726512640, "block_total": 3595340, "mount": "/var/log", "block_available": 1097794, "size_available": 4496564224, "fstype": "ext4", "inode_total": 896000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var_log", "inode_used": 1244, "block_size": 4096, "inode_available": 894756}, {"block_used": 5593503, "uuid": "e942adfa-0d68-461f-9629-1d6aaf55e86b", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 2265954, "size_available": 9281347584, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 553995, "block_size": 4096, "inode_available": 15172597}, {"block_used": 5593503, "uuid": "e942adfa-0d68-461f-9629-1d6aaf55e86b", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 
2265954, "size_available": 9281347584, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 553995, "block_size": 4096, "inode_available": 15172597}, {"block_used": 5593503, "uuid": "e942adfa-0d68-461f-9629-1d6aaf55e86b", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 2265954, "size_available": 9281347584, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 553995, "block_size": 4096, "inode_available": 15172597}, {"block_used": 1606, "uuid": "N/A", "size_total": 5150605312, "block_total": 9824, "mount": "/var/lib/origin/openshift.local.volumes/pods/c42ea210-13ec-11e9-9ffb-005056aa3492/volumes/kubernetes.io~nfs/cisco-callactions", "block_available": 8218, "size_available": 4308598784, "fstype": "nfs4", "inode_total": 327680, "options": "rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.29.80.173,local_lock=none,addr=172.30.80.182", "device": "172.30.80.182:/nfs/exports/cisco-callactions", "inode_used": 122, "block_size": 524288, "inode_available": 327558}], "ansible_nodename": "sp-os-node12.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_veth7173b11b": {"macaddress": "3e:60:c6:9c:fc:ad", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7173b11b", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3c60:c6ff:fe9c:fcad"}], "active": true, "speed": 10000}, "ansible_veth37406833": {"macaddress": "82:19:e4:ee:5d:b0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": 
"on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth37406833", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8019:e4ff:feee:5db0"}], "active": true, "speed": 10000}, "ansible_lvm": {"pvs": {"/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.99", "vg": "vg01"}, "var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "37.99", "num_lvs": "5", "num_pvs": "2"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIEK5Esa5v9wIRIE3eUSdTGr6kN0weF33b77+EckXVYdO", "ansible_processor_cores": 1, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153929", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044769", "iso8601_micro": "2019-01-09T14:39:29.550852Z", "weekday": "Wednesday", "time": "15:39:29", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:29Z", "day": "09", "iso8601_basic": "20190109T153929550770", "second": "29"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_veth6d9304ce": {"macaddress": "fa:d4:d9:e7:e6:27", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", 
"rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6d9304ce", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f8d4:d9ff:fee7:e627"}], "active": true, "speed": 10000}, "ansible_effective_user_id": 0, "ansible_veth7f753d50": {"macaddress": "1e:68:50:ca:e6:3c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7f753d50", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c68:50ff:feca:e63c"}], "active": true, "speed": 10000}, "ansible_product_name": "VMware Virtual 
Platform", "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-RzcNmB-vUHy-w6KX-WL2Z-h3aA-u0oI-vA24eP"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-S4m8t4-65KR-DgTp-JthZ-9pd5-WQlW-7oVgNN"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29343744", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.99 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", 
"dm-uuid-LVM-2RelHB35A3F9IyZtZItDvqAgnFfZ0qOFfunHk9H54OoeJDl7cS0BSoYrEXfbUOWe"], "uuids": ["e942adfa-0d68-461f-9629-1d6aaf55e86b"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_user_uid": 0, "ansible_vethbcc3383e": {"macaddress": "e6:34:f9:7b:e1:94", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", 
"tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethbcc3383e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e434:f9ff:fe7b:e194"}], "active": true, "speed": 10000}, "ansible_veth71d28e37": {"macaddress": "fa:b2:3f:46:7c:f7", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth71d28e37", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f8b2:3fff:fe46:7cf7"}], "active": true, "speed": 10000}, "ansible_bios_date": "04/05/2016", "ansible_vethdc3caf54": {"macaddress": "a2:b8:b6:57:c3:8f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", 
"rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethdc3caf54", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a0b8:b6ff:fe57:c38f"}], "active": true, "speed": 10000}, "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_vethb1141b9e": {"macaddress": "62:0b:72:ae:82:86", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb1141b9e", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::600b:72ff:feae:8286"}], 
"active": true, "speed": 10000}, "ansible_br0": {"macaddress": "22:a7:c5:36:42:49", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') ok: [sp-os-node03.os.ad.scanplus.de] (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_distribution_version": "7.5", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 58002 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "26333", "_": "/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 58002 172.29.80.171 22"}, "ansible_veth3a7badb1": {"macaddress": "5a:e7:d7:3f:0d:91", "features": {"tx_checksum_ipv4": 
"off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth3a7badb1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::58e7:d7ff:fe3f:d91"}], "active": true, "speed": 10000}, "ansible_veth2db6875f": {"macaddress": "9a:a1:a0:24:8d:75", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2db6875f", "promisc": 
true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::98a1:a0ff:fe24:8d75"}], "active": true, "speed": 10000}, "ansible_veth929c4f0d": {"macaddress": "76:e8:51:52:34:f3", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth929c4f0d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::74e8:51ff:fe52:34f3"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:8a:58:d0", "network": "172.29.80.0", "mtu": 1500, "broadcast": "172.29.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.29.80.171", "interface": "ens192", "type": "ether", "gateway": "172.29.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "36:72:fb:f9:14:6c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": 
"off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "420A2897-ACF0-7164-7E1B-287C3D5CBEB8", "ansible_pkg_mgr": "yum", "ansible_service_mgr": "systemd", "ansible_distribution": "RedHat", "ansible_vethe5bb0c37": {"macaddress": "0a:4b:94:62:3e:0a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe5bb0c37", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::84b:94ff:fe62:3e0a"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:8d342ed3a021", "ansible_all_ipv6_addresses": ["fe80::70e8:75ff:fe73:5560", "fe80::7cfb:84ff:fec2:3721", "fe80::d019:fcff:fef7:5a15", "fe80::74e8:51ff:fe52:34f3", "fe80::80eb:61ff:feaf:dcfd", "fe80::7465:39ff:fe1a:fb04", 
"fe80::c496:b9ff:fe04:d35d", "fe80::58e7:d7ff:fe3f:d91", "fe80::6437:46ff:fed8:899d", "fe80::8426:9eff:fe61:393b", "fe80::4406:edff:fef4:9456", "fe80::e893:cdff:fe3f:c660", "fe80::58b4:73ff:fe2c:68f9", "fe80::a061:b5ff:fe5c:244e", "fe80::1435:f8ff:fed8:2c57", "fe80::9411:efff:fef6:6826", "fe80::803d:a3ff:fef5:3352", "fe80::456:9dff:fee2:af4f", "fe80::fcf7:91ff:fea0:df18", "fe80::e0ef:dfff:fe66:cef6", "fe80::250:56ff:fe8a:58d0", "fe80::b4e1:7fff:fea2:c73d", "fe80::e894:2bff:fed8:fae", "fe80::84b:94ff:fe62:3e0a", "fe80::98a1:a0ff:fe24:8d75", "fe80::4818:b0ff:fed5:6896"], "ansible_uptime_seconds": 6245778, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_vethe6c58ed3": {"macaddress": "82:3d:a3:f5:33:52", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe6c58ed3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::803d:a3ff:fef5:3352"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 0a 28 97 ac f0 71 64-7e 1b 28 7c 3d 5c be b8", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_veth22c49bd7": {"macaddress": "a2:61:b5:5c:24:4e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", 
"tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth22c49bd7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a061:b5ff:fe5c:244e"}], "active": true, "speed": 10000}, "ansible_local": {"openshift": {"node": {"schedulable": "false", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "proxy_mode": "iptables", "dns_ip": "172.29.80.171", "bootstrapped": false}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"system_images_registry": "registry.access.redhat.com", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "portal_net": "172.18.128.0/17", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vethf36a71a7": {"macaddress": "ea:93:cd:3f:c6:60", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", 
"tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf36a71a7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e893:cdff:fe3f:c660"}], "active": true, "speed": 10000}, "ansible_vxlan_sys_4789": {"macaddress": "66:37:46:d8:89:9d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6437:46ff:fed8:899d"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:25:de:d6:4b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", 
"large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.024225ded64b", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_veth4e526a03": {"macaddress": "96:11:ef:f6:68:26", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4e526a03", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9411:efff:fef6:6826"}], "active": true, "speed": 10000}, "ansible_veth34530a63": {"macaddress": "4a:18:b0:d5:68:96", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", 
"tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth34530a63", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4818:b0ff:fed5:6896"}], "active": true, "speed": 10000}, "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK0GUn99qZiMbICsZn1kVI56R12tmqz0l5n0U2wg9EIKTiocsYR7/xkA+jUmZnA9/xnvbCRma18goK+zl/aPMu4=", "ansible_mounts": [{"block_used": 714313, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1833292, "size_available": 7509164032, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57359, "block_size": 4096, "inode_available": 596721}, {"block_used": 73746, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 425922, "size_available": 1744576512, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 353, "block_size": 4096, "inode_available": 130719}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 32, "block_size": 4096, "inode_available": 255968}, {"block_used": 309808, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 1205840, "size_available": 4939120640, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 4738, "block_size": 4096, "inode_available": 384638}, {"block_used": 2317777, "uuid": "fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 14726512640, "block_total": 3595340, "mount": "/var/log", "block_available": 1277563, "size_available": 5232898048, "fstype": "ext4", "inode_total": 896000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var_log", "inode_used": 1295, "block_size": 4096, "inode_available": 
894705}, {"block_used": 4375730, "uuid": "1ffd30ae-5174-4e60-b01e-f19ddcf2bba9", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 3483727, "size_available": 14269345792, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 557344, "block_size": 4096, "inode_available": 15169248}, {"block_used": 4375730, "uuid": "1ffd30ae-5174-4e60-b01e-f19ddcf2bba9", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 3483727, "size_available": 14269345792, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 557344, "block_size": 4096, "inode_available": 15169248}, {"block_used": 4375730, "uuid": "1ffd30ae-5174-4e60-b01e-f19ddcf2bba9", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 3483727, "size_available": 14269345792, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 557344, "block_size": 4096, "inode_available": 15169248}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/9c64f1c1-b7a6-11e8-8f0c-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-prod-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.29.80.171,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/prod/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/6a765ef9-03a8-11e9-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-prod-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.29.80.171,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/prod/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/6a765ef9-03a8-11e9-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-prod-media", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.29.80.171,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/prod/media", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}], "ansible_system_vendor": "VMware, Inc.", "ansible_veth29904d76": {"macaddress": "c6:96:b9:04:d3:5d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off 
[fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth29904d76", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c496:b9ff:fe04:d35d"}], "active": true, "speed": 10000}, "ansible_vethba45d8e6": {"macaddress": "ea:94:2b:d8:0f:ae", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethba45d8e6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e894:2bff:fed8:fae"}], 
"active": true, "speed": 10000}, "ansible_swaptotal_mb": 0, "ansible_veth16907316": {"macaddress": "b6:e1:7f:a2:c7:3d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth16907316", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b4e1:7fff:fea2:c73d"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_tun0": {"macaddress": "46:06:ed:f4:94:56", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", 
"udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.25.255", "netmask": "255.255.254.0", "network": "172.18.24.0", "address": "172.18.24.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4406:edff:fef4:9456"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC5MKekQ0Rs0PWZlaZIXOMhA5kN57CyKl0EpzFDSLGksNNRQdZBT0rhFw7xdyZpziJvdcRc3CGBGYd7d2QR/06qIO9eItI3k63CyAwykyTrxbOEkL0V+vnrzSZoHH1SL8Cs4jbPJRYZLKln0U+cXHJAhrMPdPXM1cX04nUc1XqMLWOryWrgbGmlE83K6TiubXKU4tffDvBgEPsQ54cLkvdXIdlVlPKOAzXCjNN+xFOFxbIayWYWWEPKf8+KqSunzPBhJxjhhjcYhB3cdjigC99eEKnZ/zj1j0TPHuUDiDYO6Si93coq8frcd+lzz5mhRESrqdMo8AowE7GbEdYB3ZOn", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:8a:58:d0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.29.80.255", "netmask": "255.255.255.0", "network": "172.29.80.0", "address": "172.29.80.171"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:fe8a:58d0"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_vetheda5f093": {"macaddress": "16:35:f8:d8:2c:57", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": 
"off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetheda5f093", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1435:f8ff:fed8:2c57"}], "active": true, "speed": 10000}, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.24.1", "172.29.80.171"], "ansible_python_version": "2.7.5", "ansible_product_version": "None", "ansible_veth2bb809a4": {"macaddress": "5a:b4:73:2c:68:f9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2bb809a4", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::58b4:73ff:fe2c:68f9"}], "active": true, "speed": 10000}, "ansible_memory_mb": {"real": {"total": 15868, "used": 15364, "free": 504}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 3883, "free": 11985}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.29.80.171"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_vethdb9c7a84": {"macaddress": "d2:19:fc:f7:5a:15", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", 
"rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethdb9c7a84", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d019:fcff:fef7:5a15"}], "active": true, "speed": 10000}, "ansible_device_links": {"masters": {"sdb1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-1LEphM-cKvf-Uw1J-DUyC-kP5R-WBYx-ZB2VJV"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-DjP6rh-Dj9x-1dfg-ZD3X-gOf8-h87e-ffIbKE"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-PxtimdsnuGvbGJC7DsyRsWL5pd5Q24Sq337CW8kVFSGcTBw4AezPeAbsAtDdOrfn"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-5": ["1ffd30ae-5174-4e60-b01e-f19ddcf2bba9"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_veth94fd7847": {"macaddress": "72:e8:75:73:55:60", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off 
[fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth94fd7847", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::70e8:75ff:fe73:5560"}], "active": true, "speed": 10000}, "ansible_apparmor": {"status": "disabled"}, "ansible_memfree_mb": 504, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node10", "ansible_vethcaf8d535": {"macaddress": "82:eb:61:af:dc:fd", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethcaf8d535", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::80eb:61ff:feaf:dcfd"}], "active": true, "speed": 10000}, "ansible_interfaces": ["vetheda5f093", "veth3a7badb1", "ovs-system", "tun0", "vethba45d8e6", "veth2bb809a4", "veth0daddc9f", "vethf9ef8bf5", "lo", "vxlan_sys_4789", "vethdb9c7a84", "veth94fd7847", "veth34530a63", "vethf36a71a7", "veth22c49bd7", "docker0", "veth2db6875f", "br0", "veth4e526a03", "vethe5bb0c37", "vethcaf8d535", "veth927a4541", "veth43ae7053", "vethe6c58ed3", "vethce2a538b", "veth929c4f0d", "veth537898ed", "veth29904d76", 
"ens192", "veth16907316"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_fqdn": "sp-os-node10.os.ad.scanplus.de", "ansible_user_gid": 0, "ansible_nodename": "sp-os-node10.os.ad.scanplus.de", "ansible_vethf9ef8bf5": {"macaddress": "76:65:39:1a:fb:04", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf9ef8bf5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7465:39ff:fe1a:fb04"}], "active": true, "speed": 10000}, "ansible_distribution_file_search_string": "Red Hat", "ansible_vethce2a538b": {"macaddress": "e2:ef:df:66:ce:f6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", 
"tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethce2a538b", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e0ef:dfff:fe66:cef6"}], "active": true, "speed": 10000}, "ansible_lvm": {"pvs": {"/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.99", "vg": "vg01"}, "var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "37.99", "num_lvs": "5", "num_pvs": "2"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIBImRWem0UunoZ9N03Pnae8f7loqzPz6ciTo6HIuZG3O", "ansible_processor_cores": 1, "ansible_veth0daddc9f": {"macaddress": "86:26:9e:61:39:3b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0daddc9f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8426:9eff:fe61:393b"}], "active": true, "speed": 10000}, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153929", "tz": "CET", "weeknumber": "01", "hour": "15", "year": 
"2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044769", "iso8601_micro": "2019-01-09T14:39:29.618288Z", "weekday": "Wednesday", "time": "15:39:29", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:29Z", "day": "09", "iso8601_basic": "20190109T153929618204", "second": "29"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth927a4541": {"macaddress": "7e:fb:84:c2:37:21", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth927a4541", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7cfb:84ff:fec2:3721"}], "active": true, "speed": 10000}, "ansible_product_name": "VMware Virtual Platform", "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", 
"holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-1LEphM-cKvf-Uw1J-DUyC-kP5R-WBYx-ZB2VJV"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-DjP6rh-Dj9x-1dfg-ZD3X-gOf8-h87e-ffIbKE"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29343744", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.99 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-PxtimdsnuGvbGJC7DsyRsWL5pd5Q24Sq337CW8kVFSGcTBw4AezPeAbsAtDdOrfn"], "uuids": ["1ffd30ae-5174-4e60-b01e-f19ddcf2bba9"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", 
"sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "04/05/2016", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz"], "ansible_veth43ae7053": {"macaddress": "fe:f7:91:a0:df:18", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", 
"ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth43ae7053", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcf7:91ff:fea0:df18"}], "active": true, "speed": 10000}, "ansible_veth537898ed": {"macaddress": "06:56:9d:e2:af:4f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth537898ed", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::456:9dff:fee2:af4f"}], "active": true, "speed": 10000}, "ansible_br0": {"macaddress": "62:29:82:3a:f7:40", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", 
"rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_vethba48cd1e": {"macaddress": "12:77:bf:fa:60:56", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethba48cd1e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::1077:bfff:fefa:6056"}], "active": true, "speed": 10000}, "module_setup": true, "ansible_vethea3aab97": {"macaddress": "26:df:4e:b1:71:79", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethea3aab97", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::24df:4eff:feb1:7179"}], "active": true, "speed": 10000}, "ansible_distribution_version": "7.5", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 37122 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "22401", "_": "/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 37122 172.30.81.89 22"}, "ansible_vethfbc701ec": {"macaddress": "3a:e5:01:e4:c6:c5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off 
[fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethfbc701ec", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::38e5:1ff:fee4:c6c5"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:18:20", "network": "172.30.81.0", "mtu": 1500, "broadcast": "172.30.81.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.81.89", "interface": "ens192", "type": "ether", "gateway": "172.30.81.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_veth7cf5cfef": {"macaddress": "0e:92:57:46:ec:3d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7cf5cfef", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c92:57ff:fe46:ec3d"}], "active": true, "speed": 10000}, "ansible_vethaf4a6ae0": {"macaddress": "3e:54:e0:f2:8f:18", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethaf4a6ae0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3c54:e0ff:fef2:8f18"}], "active": true, "speed": 10000}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_veth6a9e5018": {"macaddress": "fe:8f:55:62:b0:60", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", 
"tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6a9e5018", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fc8f:55ff:fe62:b060"}], "active": true, "speed": 10000}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "32:cd:fb:a9:ba:1f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422AF2F5-6B74-B0B2-5D2E-B8FBEBB504B0", "ansible_vethc346da89": {"macaddress": "5e:57:35:d3:7f:0b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", 
"generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc346da89", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5c57:35ff:fed3:7f0b"}], "active": true, "speed": 10000}, "ansible_veth7b0435da": {"macaddress": "22:d4:d2:11:36:4a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7b0435da", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::20d4:d2ff:fe11:364a"}], "active": true, "speed": 10000}, "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:59a2dc85ab7", "ansible_veth35a42e43": {"macaddress": "76:20:12:a2:6d:64", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", 
"fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth35a42e43", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7420:12ff:fea2:6d64"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::3c36:c3ff:fe62:b44f", "fe80::f863:f1ff:fe2c:1965", "fe80::d896:39ff:fecb:8c90", "fe80::b06a:6ff:fed7:d230", "fe80::9c3f:ff:feaf:926a", "fe80::34c7:3aff:fe2d:4690", "fe80::3826:65ff:fe58:78b2", "fe80::a84b:ddff:fe0e:9923", "fe80::d432:a4ff:fea5:c27c", "fe80::3c6e:19ff:fe48:1649", "fe80::8c11:a3ff:fedd:74d1", "fe80::600e:6dff:fe7c:1259", "fe80::b403:6dff:fe50:be15", "fe80::b4f2:68ff:fed6:5df3", "fe80::74e3:84ff:fecb:bda4", "fe80::fc8f:55ff:fe62:b060", "fe80::88ef:63ff:fecf:bc5d", "fe80::907f:88ff:fee1:9497", "fe80::7806:54ff:fe02:2dd6", "fe80::8cf0:b3ff:fe3b:f72a", "fe80::4036:e7ff:fea1:f4ff", "fe80::5c57:35ff:fed3:7f0b", "fe80::b05a:a2ff:fe4a:4bb", "fe80::41b:96ff:feb7:f7ca", "fe80::c92:57ff:fe46:ec3d", "fe80::c41b:f0ff:fe8e:e5e", "fe80::b4b9:4bff:fe33:c6db", "fe80::7420:12ff:fea2:6d64", "fe80::548b:89ff:feec:b288", "fe80::8ca3:14ff:fef0:4bd9", "fe80::24df:4eff:feb1:7179", "fe80::8048:55ff:fe4d:321", "fe80::5076:48ff:fe3d:66d", "fe80::38e5:1ff:fee4:c6c5", "fe80::c8ad:6ff:fea4:c648", "fe80::3c54:e0ff:fef2:8f18", "fe80::250:56ff:feaa:1820", "fe80::5cb5:9cff:fe7a:71af", "fe80::1077:bfff:fefa:6056", "fe80::e0f3:29ff:fea8:4d24", "fe80::9c90:24ff:fe45:1392", "fe80::f84a:70ff:fec6:a433", "fe80::20d4:d2ff:fe11:364a", "fe80::683f:c2ff:fe9d:25d1"], "ansible_uptime_seconds": 4251296, "ansible_veth55f62167": {"macaddress": "e2:f3:29:a8:4d:24", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", 
"tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth55f62167", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e0f3:29ff:fea8:4d24"}], "active": true, "speed": 10000}, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_veth4e771f2c": {"macaddress": "52:76:48:3d:06:6d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4e771f2c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5076:48ff:fe3d:66d"}], "active": true, "speed": 10000}, "ansible_veth9ba652fe": {"macaddress": "06:1b:96:b7:f7:ca", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off 
[fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9ba652fe", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::41b:96ff:feb7:f7ca"}], "active": true, "speed": 10000}, "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_veth939dbfd8": {"macaddress": "8e:11:a3:dd:74:d1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth939dbfd8", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8c11:a3ff:fedd:74d1"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 2a f2 f5 6b 74 b0 b2-5d 2e b8 fb eb b5 04 
b0", "ansible_form_factor": "Other", "ansible_os_family": "RedHat", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_veth49536b9e": {"macaddress": "fa:63:f1:2c:19:65", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth49536b9e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f863:f1ff:fe2c:1965"}], "active": true, "speed": 10000}, "ansible_system_capabilities_enforced": "True", "ansible_local": {"openshift": {"node": {"schedulable": "false", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.81.89", "bootstrapped": false}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"portal_net": "172.18.128.0/17", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "92:7f:88:e1:94:97", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off 
[fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::907f:88ff:fee1:9497"}], "active": true, "type": "ether"}, "ansible_veth5eda38f9": {"macaddress": "fa:4a:70:c6:a4:33", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5eda38f9", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f84a:70ff:fec6:a433"}], "active": true, "speed": 10000}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:c5:e6:c3:4c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", 
"tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.0242c5e6c34c", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNX24NC2DypH/tOVj4yr5YAQYFNfqDbqQgmpslwKA6k8ZAdmu+17LLag8Thp3f4V6Z+UUstiUyaux5uglurKgFU=", "ansible_vetha922bfd7": {"macaddress": "8a:ef:63:cf:bc:5d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", 
"generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha922bfd7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::88ef:63ff:fecf:bc5d"}], "active": true, "speed": 10000}, "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_vetha2cfc91c": {"macaddress": "5e:b5:9c:7a:71:af", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha2cfc91c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5cb5:9cff:fe7a:71af"}], "active": true, "speed": 10000}, "ansible_veth22dd31be": {"macaddress": "8e:f0:b3:3b:f7:2a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", 
"scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth22dd31be", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8cf0:b3ff:fe3b:f72a"}], "active": true, "speed": 10000}, "ansible_vethc6786e50": {"macaddress": "3e:36:c3:62:b4:4f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc6786e50", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3c36:c3ff:fe62:b44f"}], "active": true, "speed": 10000}, "ansible_user_shell": "/bin/bash", "ansible_vethdb98f96e": {"macaddress": "da:96:39:cb:8c:90", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", 
"tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethdb98f96e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d896:39ff:fecb:8c90"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_vethd17d2559": {"macaddress": "36:c7:3a:2d:46:90", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, 
"device": "vethd17d2559", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::34c7:3aff:fe2d:4690"}], "active": true, "speed": 10000}, "ansible_vethe0c8b0c9": {"macaddress": "9e:3f:00:af:92:6a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe0c8b0c9", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9c3f:ff:feaf:926a"}], "active": true, "speed": 10000}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC2a75vgP1T/LJ6WAcmCF3CC6uMcX6pJa+U3Meg+tMymiV0uoKpqcuVKWGMigSAApf4Bb7ig2DrD5ZIqlCmEWbxPtsKWF0/4IjvNy45Kf3I4GBL51VMyKSPcvkzLE7Dg3QX7sJgwJs+/bHqj9NagFTkqmCANWuT4+Kxbs/YSP+/TZp/mFt7TjTDSq2B4pVUCQBc1pS72q+ZNYXaIT+dK76PY2wNllYNSj0SXOE3uA4HtBlSOFnQN000rtoFIU9VqjIcMPRN4jaqkZr+wmFwYXdQtzk4Kh9RFCTwlz0b8PHAEsYKY0KASgUbuhGKw6uGKh5RenVAL3VHLfgPVag3dRLF", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:18:20", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": 
"on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.81.255", "netmask": "255.255.255.0", "network": "172.30.81.0", "address": "172.30.81.89"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:1820"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_system": "Linux", "ansible_veth755adbe8": {"macaddress": "b6:f2:68:d6:5d:f3", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth755adbe8", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b4f2:68ff:fed6:5df3"}], "active": true, "speed": 10000}, "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.16.1", "172.30.81.89"], "ansible_python_version": "2.7.5", "ansible_vetha64e9def": {"macaddress": "ca:ad:06:a4:c6:48", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off 
[fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha64e9def", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c8ad:6ff:fea4:c648"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15579, "free": 289}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 7054, "free": 8814}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_vetha6a17b1e": {"macaddress": "76:e3:84:cb:bd:a4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", 
"receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha6a17b1e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::74e3:84ff:fecb:bda4"}], "active": true, "speed": 10000}, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.81.89"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdb1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-18ExYn-mhtK-6x6W-saFx-6iz0-tMcO-qFuyT4"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-rfmtT9-eD1k-OmSp-pQYM-cu8G-5POq-cK9FGX"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-lH4a7tKx8BVV5b1VJ0nRf0gS8yNIoqBFG6mgeorTH3CXxlGTFlNv2Oj3CoYVm4h6"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", 
"dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-5": ["2f7c2be6-d57f-49c4-92c2-737067c69f34"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_veth6d30d821": {"macaddress": "8e:a3:14:f0:4b:d9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6d30d821", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8ca3:14ff:fef0:4bd9"}], "active": true, "speed": 10000}, "ansible_apparmor": {"status": "disabled"}, "ansible_memfree_mb": 289, "ansible_lvm": {"pvs": {"/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.99", "vg": "vg01"}, "var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "37.99", "num_lvs": "5", "num_pvs": "2"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node06", "ansible_tun0": {"macaddress": "c6:1b:f0:8e:0e:5e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on 
[fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.17.255", "netmask": "255.255.254.0", "network": "172.18.16.0", "address": "172.18.16.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c41b:f0ff:fe8e:e5e"}], "active": true, "type": "ether"}, "ansible_interfaces": ["vetha64e9def", "veth939dbfd8", "veth4c7d0cb6", "tun0", "vetha6a17b1e", "veth4e771f2c", "vethb65ada1e", "veth7b0435da", "vethc6786e50", "vethdb98f96e", "vethaf4a6ae0", "vethe0bc9d77", "vethf518e52d", "lo", "veth1707713b", "veth22dd31be", "vethd17d2559", "veth59d6c467", "veth31307b5a", "veth35a42e43", "ovs-system", "vethba48cd1e", "veth9d47e360", "vetha922bfd7", "docker0", "veth755adbe8", "veth27ecf2e0", "vethf44b58b2", "vethfbc701ec", "vetha2cfc91c", "br0", "veth49536b9e", "veth0a604005", "veth6d30d821", "veth7cf5cfef", "vethdcb7245a", "vethfeb2cbf6", "veth55f62167", "veth40553e76", "vethe0c8b0c9", "veth6a9e5018", "veth5eda38f9", "veth9ba652fe", "vethc346da89", "vethea3aab97", "veth6c8b0820", "ens192", "vxlan_sys_4789"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_fqdn": "sp-os-node06.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 713084, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1834521, "size_available": 7514198016, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57319, "block_size": 4096, "inode_available": 596761}, {"block_used": 73777, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 425891, "size_available": 1744449536, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 353, "block_size": 4096, "inode_available": 130719}, {"block_used": 59319, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932217, 
"size_available": 3818360832, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 33, "block_size": 4096, "inode_available": 255967}, {"block_used": 937932, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 577716, "size_available": 2366324736, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 6721, "block_size": 4096, "inode_available": 382655}, {"block_used": 2487124, "uuid": "fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 14726512640, "block_total": 3595340, "mount": "/var/log", "block_available": 1108216, "size_available": 4539252736, "fstype": "ext4", "inode_total": 896000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var_log", "inode_used": 332, "block_size": 4096, "inode_available": 895668}, {"block_used": 4478010, "uuid": "2f7c2be6-d57f-49c4-92c2-737067c69f34", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 3381447, "size_available": 13850406912, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 934194, "block_size": 4096, "inode_available": 14792398}, {"block_used": 4478010, "uuid": "2f7c2be6-d57f-49c4-92c2-737067c69f34", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 3381447, "size_available": 13850406912, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 934194, "block_size": 4096, "inode_available": 14792398}, {"block_used": 4478010, "uuid": "2f7c2be6-d57f-49c4-92c2-737067c69f34", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 3381447, "size_available": 13850406912, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 934194, "block_size": 4096, "inode_available": 14792398}], "ansible_nodename": "sp-os-node06.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_veth9d47e360": {"macaddress": "b6:03:6d:50:be:15", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", 
"rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9d47e360", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b403:6dff:fe50:be15"}], "active": true, "speed": 10000}, "ansible_vethdcb7245a": {"macaddress": "9e:90:24:45:13:92", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethdcb7245a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9c90:24ff:fe45:1392"}], "active": true, "speed": 10000}, "ansible_veth27ecf2e0": {"macaddress": "3e:6e:19:48:16:49", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", 
"tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth27ecf2e0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3c6e:19ff:fe48:1649"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIPqqYpYRxRKqFk9prdi6VO/dFbuSpUN+n6x0HCSfQiqC", "ansible_processor_cores": 1, "ansible_veth40553e76": {"macaddress": "6a:3f:c2:9d:25:d1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth40553e76", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::683f:c2ff:fe9d:25d1"}], "active": true, "speed": 10000}, "ansible_bios_version": "6.00", "ansible_veth59d6c467": {"macaddress": "62:0e:6d:7c:12:59", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", 
"tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth59d6c467", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::600e:6dff:fe7c:1259"}], "active": true, "speed": 10000}, "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153929", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044769", "iso8601_micro": "2019-01-09T14:39:29.305648Z", "weekday": "Wednesday", "time": "15:39:29", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:29Z", "day": "09", "iso8601_basic": "20190109T153929305534", "second": "29"}, "ansible_veth0a604005": {"macaddress": "82:48:55:4d:03:21", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": 
"on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0a604005", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8048:55ff:fe4d:321"}], "active": true, "speed": 10000}, "ansible_distribution_release": "Maipo", "ansible_vethb65ada1e": {"macaddress": "b6:b9:4b:33:c6:db", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb65ada1e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b4b9:4bff:fe33:c6db"}], "active": true, "speed": 10000}, "ansible_vethe0bc9d77": {"macaddress": "3a:26:65:58:78:b2", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": 
"off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe0bc9d77", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3826:65ff:fe58:78b2"}], "active": true, "speed": 10000}, "ansible_effective_user_id": 0, "ansible_product_name": "VMware Virtual Platform", "ansible_veth6c8b0820": {"macaddress": "56:8b:89:ec:b2:88", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6c8b0820", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::548b:89ff:feec:b288"}], "active": true, "speed": 10000}, "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT 
SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-18ExYn-mhtK-6x6W-saFx-6iz0-tMcO-qFuyT4"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-rfmtT9-eD1k-OmSp-pQYM-cu8G-5POq-cK9FGX"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29343744", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.99 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-lH4a7tKx8BVV5b1VJ0nRf0gS8yNIoqBFG6mgeorTH3CXxlGTFlNv2Oj3CoYVm4h6"], "uuids": ["2f7c2be6-d57f-49c4-92c2-737067c69f34"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-3": 
{"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_veth1707713b": {"macaddress": "b2:6a:06:d7:d2:30", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1707713b", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b06a:6ff:fed7:d230"}], "active": true, "speed": 10000}, "ansible_user_uid": 0, "ansible_vethfeb2cbf6": {"macaddress": "7a:06:54:02:2d:d6", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethfeb2cbf6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7806:54ff:fe02:2dd6"}], "active": true, "speed": 10000}, "ansible_veth31307b5a": {"macaddress": "d6:32:a4:a5:c2:7c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth31307b5a", "promisc": false, 
"timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d432:a4ff:fea5:c27c"}], "active": true, "speed": 10000}, "ansible_veth4c7d0cb6": {"macaddress": "aa:4b:dd:0e:99:23", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4c7d0cb6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a84b:ddff:fe0e:9923"}], "active": true, "speed": 10000}, "ansible_bios_date": "09/21/2015", "ansible_vethf44b58b2": {"macaddress": "42:36:e7:a1:f4:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off 
[fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf44b58b2", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4036:e7ff:fea1:f4ff"}], "active": true, "speed": 10000}, "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_vethf518e52d": {"macaddress": "b2:5a:a2:4a:04:bb", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf518e52d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b05a:a2ff:fe4a:4bb"}], "active": true, "speed": 10000}, "ansible_br0": {"macaddress": "ba:31:02:7c:bc:45", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", 
"vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') ok: [sp-os-node02.os.ad.scanplus.de] ok: [sp-os-node11.os.ad.scanplus.de] ok: [sp-os-node05.os.ad.scanplus.de] (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_veth530f884d": {"macaddress": "0e:b1:55:8a:8e:96", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", 
"tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth530f884d", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::cb1:55ff:fe8a:8e96"}], "active": true, "speed": 10000}, "module_setup": true, "ansible_distribution_version": "7.5", "ansible_veth2ad2c862": {"macaddress": "92:6b:38:0b:b0:46", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2ad2c862", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::906b:38ff:fe0b:b046"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 46064 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "52960", "_": "/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 46064 172.30.81.90 22"}, "ansible_vethf1f0adea": {"macaddress": "7e:ec:38:39:4d:93", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on 
[fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf1f0adea", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7cec:38ff:fe39:4d93"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:12:f9", "network": "172.30.81.0", "mtu": 1500, "broadcast": "172.30.81.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.81.90", "interface": "ens192", "type": "ether", "gateway": "172.30.81.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "ae:e2:58:99:a9:18", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": 
"off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_distribution_file_search_string": "Red Hat", "ansible_product_uuid": "422AA3AF-4194-66BE-734A-E269F11EDB55", "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:fc99f6ac39ed", "ansible_vethdc8321fd": {"macaddress": "6e:bd:f8:f6:12:aa", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethdc8321fd", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6cbd:f8ff:fef6:12aa"}], "active": true, "speed": 10000}, "ansible_veth0b5f6283": {"macaddress": "26:15:90:b0:b1:4f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": 
"on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0b5f6283", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2415:90ff:feb0:b14f"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::6cd6:88ff:feab:40e0", "fe80::7cc9:6dff:fe3d:4be5", "fe80::c4c4:deff:feb4:384c", "fe80::102d:c6ff:fe1b:ce00", "fe80::b0a5:caff:fee6:ad67", "fe80::ec2d:60ff:fe6e:c694", "fe80::7897:1bff:febd:aecb", "fe80::9424:97ff:fe38:2507", "fe80::d884:7eff:fea3:bce9", "fe80::4d2:4aff:fe93:2433", "fe80::2445:31ff:fe99:8e81", "fe80::906b:38ff:fe0b:b046", "fe80::d0c1:e5ff:fede:d57e", "fe80::64fc:37ff:fef8:92f4", "fe80::cc08:45ff:feae:2bb1", "fe80::e0c4:5bff:fe85:7a", "fe80::6cbd:f8ff:fef6:12aa", "fe80::18b1:4aff:fead:e4a1", "fe80::7408:82ff:fee4:c983", "fe80::ccbf:6fff:fee3:6e42", "fe80::30b5:ddff:fe50:750f", "fe80::8ab:3bff:fe3d:d41b", "fe80::d47d:a9ff:fe8e:cdf4", "fe80::d84a:2eff:feba:878c", "fe80::bc2a:d2ff:fe3f:281", "fe80::c0d9:18ff:fe57:828f", "fe80::8866:46ff:fed6:1ce4", "fe80::b41e:42ff:fe15:b77a", "fe80::f421:6bff:feaf:473c", "fe80::d459:ddff:fed9:d415", "fe80::acfd:92ff:fe0e:f381", "fe80::e853:cff:fe9d:d7a1", "fe80::e463:80ff:fefe:5d17", "fe80::d49a:a0ff:fed9:5f01", "fe80::cb1:55ff:fe8a:8e96", "fe80::fcd7:71ff:fe26:308f", "fe80::bc59:b1ff:fe0d:90ac", "fe80::a82c:eaff:fe07:7183", "fe80::d89e:6ff:fe40:fc90", "fe80::250:56ff:feaa:12f9", "fe80::8cb:33ff:fe3a:963f", "fe80::ac12:d0ff:feaf:df53", "fe80::6c3c:57ff:fea1:d38e", "fe80::7cec:38ff:fe39:4d93", "fe80::2415:90ff:feb0:b14f", "fe80::600e:e7ff:fec4:801a"], "ansible_uptime_seconds": 10165034, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_veth3df5cc2a": {"macaddress": "66:fc:37:f8:92:f4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", 
"loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth3df5cc2a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::64fc:37ff:fef8:92f4"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 2a a3 af 41 94 66 be-73 4a e2 69 f1 1e db 55", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_veth973e8754": {"macaddress": "da:9e:06:40:fc:90", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth973e8754", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d89e:6ff:fe40:fc90"}], "active": true, "speed": 10000}, "ansible_local": {"openshift": {"node": {"schedulable": "false", 
"labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.81.90", "bootstrapped": true}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"system_images_registry": "registry.access.redhat.com", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "portal_net": "172.18.128.0/17", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "76:08:82:e4:c9:83", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7408:82ff:fee4:c983"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:98:fc:88:11", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", 
"rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.024298fc8811", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK7DUJCSPsABkiUO1a+JZLRhBmXohKor9qHdGPUsW87ylveBgw8itVLz+WBSho/202IF60AYuXBzh8UWsLgKjgc=", "ansible_mounts": [{"block_used": 714687, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1832918, "size_available": 7507632128, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57325, "block_size": 4096, "inode_available": 596755}, {"block_used": 73779, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 425889, "size_available": 1744441344, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 353, "block_size": 4096, "inode_available": 130719}, {"block_used": 72828, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 918708, "size_available": 3763027968, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 34, "block_size": 4096, "inode_available": 255966}, {"block_used": 511111, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 1004537, "size_available": 4114583552, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 11696, "block_size": 4096, "inode_available": 377680}, {"block_used": 2460053, "uuid": 
"fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 14726512640, "block_total": 3595340, "mount": "/var/log", "block_available": 1135287, "size_available": 4650135552, "fstype": "ext4", "inode_total": 896000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var_log", "inode_used": 330, "block_size": 4096, "inode_available": 895670}, {"block_used": 6534682, "uuid": "676dc9a3-588f-4746-9160-82d7cdbd4064", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 1324775, "size_available": 5426278400, "fstype": "xfs", "inode_total": 12021176, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 1422506, "block_size": 4096, "inode_available": 10598670}, {"block_used": 6534682, "uuid": "676dc9a3-588f-4746-9160-82d7cdbd4064", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 1324775, "size_available": 5426278400, "fstype": "xfs", "inode_total": 12021176, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 1422506, "block_size": 4096, "inode_available": 10598670}, {"block_used": 6534682, "uuid": "676dc9a3-588f-4746-9160-82d7cdbd4064", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 1324775, "size_available": 5426278400, "fstype": "xfs", "inode_total": 12021176, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 1422506, "block_size": 4096, "inode_available": 10598670}], "ansible_system_vendor": "VMware, Inc.", "ansible_lvm": {"pvs": {"/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.99", "vg": "vg01"}, "var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "37.99", "num_lvs": "5", "num_pvs": "2"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_swaptotal_mb": 0, "ansible_veth08c40b8a": {"macaddress": "b2:a5:ca:e6:ad:67", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", 
"rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth08c40b8a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b0a5:caff:fee6:ad67"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_vethcb94b328": {"macaddress": "e6:63:80:fe:5d:17", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethcb94b328", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e463:80ff:fefe:5d17"}], "active": true, "speed": 10000}, "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_tun0": {"macaddress": "b6:1e:42:15:b7:7a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off 
[fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.21.255", "netmask": "255.255.254.0", "network": "172.18.20.0", "address": "172.18.20.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b41e:42ff:fe15:b77a"}], "active": true, "type": "ether"}, "ansible_veth80bc4a67": {"macaddress": "aa:2c:ea:07:71:83", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth80bc4a67", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a82c:eaff:fe07:7183"}], "active": true, "speed": 10000}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQCpM3mcVUouPcbyFgVGc5pSLFLBB3dTXBindFqvEsvzY7H7596pcFW0ttTen1Kjo9KwLVesGFiZcPOGQk0QLC3Y2liLidSX7CMSch8rTIbkzQV6n+jhtm8BHQzCgdKr23c//tBuxcPL3gMzFBkw8u+eTUB11xAQBt7vN2w/FKDtb4rU8FjV2zE7X6c0XGTa9BsRz9QVzzi9lc65PPWPEd49dT7mg0AiaeZbU/IZOqOoPRXpMZP3GjGrUFCRY97yrIVmrvR4JozVCNrcoMlPWXOseVI6lFNUcRQRLF8QUdU5b+XISGZjmLnrQwvFbUnm2lzK4K3mppHhT+Plh7cTLWIR", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:12:f9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.81.255", "netmask": "255.255.255.0", "network": "172.30.81.0", "address": "172.30.81.90"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:12f9"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_veth06376d13": {"macaddress": "1a:b1:4a:ad:e4:a1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off 
[fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth06376d13", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::18b1:4aff:fead:e4a1"}], "active": true, "speed": 10000}, "ansible_processor_threads_per_core": 1, "ansible_vethec87bc86": {"macaddress": "fe:d7:71:26:30:8f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethec87bc86", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcd7:71ff:fe26:308f"}], "active": true, "speed": 10000}, "ansible_product_name": "VMware Virtual Platform", "ansible_veth3ce89486": {"macaddress": "0a:ab:3b:3d:d4:1b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", 
"scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth3ce89486", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8ab:3bff:fe3d:d41b"}], "active": true, "speed": 10000}, "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.20.1", "172.30.81.90"], "ansible_python_version": "2.7.5", "ansible_veth361f77b6": {"macaddress": "6e:d6:88:ab:40:e0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth361f77b6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6cd6:88ff:feab:40e0"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15584, "free": 284}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 7577, "free": 8291}}, 
"ansible_user_dir": "/root", "ansible_veth76101c65": {"macaddress": "d2:c1:e5:de:d5:7e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth76101c65", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d0c1:e5ff:fede:d57e"}], "active": true, "speed": 10000}, "ansible_vethc949e305": {"macaddress": "ea:53:0c:9d:d7:a1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc949e305", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e853:cff:fe9d:d7a1"}], "active": true, "speed": 10000}, "gather_subset": ["all"], "ansible_veth17069f21": {"macaddress": "ee:2d:60:6e:c6:94", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth17069f21", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ec2d:60ff:fe6e:c694"}], "active": true, "speed": 10000}, "ansible_real_user_id": 0, "ansible_vethf13d7bb5": {"macaddress": "c6:c4:de:b4:38:4c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", 
"udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf13d7bb5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c4c4:deff:feb4:384c"}], "active": true, "speed": 10000}, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.81.90"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_vethd3784a26": {"macaddress": "e2:c4:5b:85:00:7a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd3784a26", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e0c4:5bff:fe85:7a"}], "active": true, "speed": 10000}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off 
[fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdb1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-KTC1dD-77r0-pzs0-vXxt-dYRr-JBjc-EXczW1"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-7LEBzK-UzN1-HpBD-XnCj-aon4-MO1b-3KDgKq"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-lAyV2Xb1oQyU11BlV8ddpMhZsspgIyNJxbfQVDFhmotzmcwUaKC2cnKLR8HZahdJ"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-5": ["676dc9a3-588f-4746-9160-82d7cdbd4064"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_veth9fb806ea": {"macaddress": "ce:bf:6f:e3:6e:42", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", 
"tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9fb806ea", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ccbf:6fff:fee3:6e42"}], "active": true, "speed": 10000}, "ansible_vethf1db0adb": {"macaddress": "62:0e:e7:c4:80:1a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf1db0adb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::600e:e7ff:fec4:801a"}], "active": true, "speed": 10000}, "ansible_veth6b7bc9e2": {"macaddress": "26:45:31:99:8e:81", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", 
"tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6b7bc9e2", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2445:31ff:fe99:8e81"}], "active": true, "speed": 10000}, "ansible_veth33856d46": {"macaddress": "8a:66:46:d6:1c:e4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth33856d46", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8866:46ff:fed6:1ce4"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 284, "ansible_system": "Linux", "ansible_vethef440b9e": {"macaddress": "ce:08:45:ae:2b:b1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": 
"off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethef440b9e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::cc08:45ff:feae:2bb1"}], "active": true, "speed": 10000}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node07", "ansible_vetha5f2495f": {"macaddress": "da:84:7e:a3:bc:e9", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha5f2495f", "promisc": 
true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d884:7eff:fea3:bce9"}], "active": true, "speed": 10000}, "ansible_vethc25fe4b5": {"macaddress": "d6:9a:a0:d9:5f:01", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc25fe4b5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d49a:a0ff:fed9:5f01"}], "active": true, "speed": 10000}, "ansible_interfaces": ["veth2ad2c862", "vethec87bc86", "vethef440b9e", "tun0", "veth17069f21", "vethcb94b328", "veth3df5cc2a", "veth6e790ef8", "veth938ee937", "lo", "veth0b5f6283", "vxlan_sys_4789", "veth8d27c966", "vethfd8e4ff8", "veth99a41649", "veth08c40b8a", "veth30e62326", "veth33856d46", "vethf13d7bb5", "veth530f884d", "veth3ce89486", "vetha5f2495f", "veth6fb025dd", "docker0", "br0", "vethd3784a26", "veth06376d13", "veth1975386d", "vethdc8321fd", "veth361f77b6", "veth973e8754", "veth4ec7314b", "veth25054a94", "veth39208e5f", "vethc949e305", "veth9bc61451", "vetha6b3cbae", "vethf84e316d", "vethf1db0adb", "veth9fb806ea", "vethf3b03bfc", "vethf1f0adea", "vethc25fe4b5", "ovs-system", "vethef778d47", "veth76101c65", "ens192", "vethc6cb2231", "veth80bc4a67", "veth6b7bc9e2"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_vethef778d47": {"macaddress": "32:b5:dd:50:75:0f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", 
"scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethef778d47", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::30b5:ddff:fe50:750f"}], "active": true, "speed": 10000}, "ansible_veth938ee937": {"macaddress": "96:24:97:38:25:07", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth938ee937", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9424:97ff:fe38:2507"}], "active": true, "speed": 10000}, "ansible_veth6fb025dd": {"macaddress": "f6:21:6b:af:47:3c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", 
"rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6fb025dd", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f421:6bff:feaf:473c"}], "active": true, "speed": 10000}, "ansible_fqdn": "sp-os-node07.os.ad.scanplus.de", "ansible_user_gid": 0, "ansible_veth8d27c966": {"macaddress": "ae:12:d0:af:df:53", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth8d27c966", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ac12:d0ff:feaf:df53"}], "active": true, "speed": 10000}, 
"ansible_vetha6b3cbae": {"macaddress": "d6:59:dd:d9:d4:15", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha6b3cbae", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d459:ddff:fed9:d415"}], "active": true, "speed": 10000}, "ansible_veth39208e5f": {"macaddress": "ae:fd:92:0e:f3:81", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, 
"type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth39208e5f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::acfd:92ff:fe0e:f381"}], "active": true, "speed": 10000}, "ansible_nodename": "sp-os-node07.os.ad.scanplus.de", "ansible_userspace_architecture": "x86_64", "ansible_br0": {"macaddress": "ea:88:8f:2b:f8:49", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_veth1975386d": {"macaddress": "06:d2:4a:93:24:33", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", 
"tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1975386d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4d2:4aff:fe93:2433"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIFNWQVqKIldv4qq13tJKuZ0KKs6lEvZfsX+4o7pknk8A", "ansible_processor_cores": 1, "ansible_vethf3b03bfc": {"macaddress": "7a:97:1b:bd:ae:cb", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf3b03bfc", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7897:1bff:febd:aecb"}], "active": true, "speed": 10000}, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153929", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044769", "iso8601_micro": "2019-01-09T14:39:29.133329Z", "weekday": "Wednesday", "time": "15:39:29", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:29Z", "day": "09", "iso8601_basic": "20190109T153929133202", "second": "29"}, "ansible_veth4ec7314b": {"macaddress": "7e:c9:6d:3d:4b:e5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", 
"tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4ec7314b", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7cc9:6dff:fe3d:4be5"}], "active": true, "speed": 10000}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth25054a94": {"macaddress": "d6:7d:a9:8e:cd:f4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth25054a94", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": 
"fe80::d47d:a9ff:fe8e:cdf4"}], "active": true, "speed": 10000}, "ansible_vethf84e316d": {"macaddress": "12:2d:c6:1b:ce:00", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf84e316d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::102d:c6ff:fe1b:ce00"}], "active": true, "speed": 10000}, "ansible_veth9bc61451": {"macaddress": "be:2a:d2:3f:02:81", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", 
"receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9bc61451", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bc2a:d2ff:fe3f:281"}], "active": true, "speed": 10000}, "ansible_devices": {"sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-KTC1dD-77r0-pzs0-vXxt-dYRr-JBjc-EXczW1"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-7LEBzK-UzN1-HpBD-XnCj-aon4-MO1b-3KDgKq"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29343744", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", 
"sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.99 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-lAyV2Xb1oQyU11BlV8ddpMhZsspgIyNJxbfQVDFhmotzmcwUaKC2cnKLR8HZahdJ"], "uuids": ["676dc9a3-588f-4746-9160-82d7cdbd4064"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_vethc6cb2231": {"macaddress": "0a:cb:33:3a:96:3f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", 
"tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc6cb2231", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8cb:33ff:fe3a:963f"}], "active": true, "speed": 10000}, "ansible_user_uid": 0, "ansible_vethfd8e4ff8": {"macaddress": "da:4a:2e:ba:87:8c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethfd8e4ff8", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d84a:2eff:feba:878c"}], "active": true, "speed": 10000}, "ansible_veth99a41649": {"macaddress": "c2:d9:18:57:82:8f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off 
[fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth99a41649", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c0d9:18ff:fe57:828f"}], "active": true, "speed": 10000}, "ansible_bios_date": "09/21/2015", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_veth30e62326": {"macaddress": "be:59:b1:0d:90:ac", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", 
"receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth30e62326", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::bc59:b1ff:fe0d:90ac"}], "active": true, "speed": 10000}, "ansible_veth6e790ef8": {"macaddress": "6e:3c:57:a1:d3:8e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6e790ef8", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6c3c:57ff:fea1:d38e"}], "active": true, "speed": 10000}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') ok: [sp-os-node10.os.ad.scanplus.de] (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_veth0bbd3a5a": {"macaddress": "ba:e6:f4:2b:c3:9f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": 
"on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0bbd3a5a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b8e6:f4ff:fe2b:c39f"}], "active": true, "speed": 10000}, "ansible_distribution_version": "7.5", "ansible_vethe52c4202": {"macaddress": "fe:84:4b:af:b1:c7", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe52c4202", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fc84:4bff:feaf:b1c7"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LESSOPEN": "||/usr/bin/lesspipe.sh %s", "SSH_CLIENT": "172.30.80.240 33344 22", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": 
"root", "QTDIR": "/usr/lib64/qt-3.3", "PATH": "/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LANG": "en_US.UTF-8", "QTLIB": "/usr/lib64/qt-3.3/lib", "SHELL": "/bin/bash", "QTINC": "/usr/lib64/qt-3.3/include", "HOME": "/root", "XDG_RUNTIME_DIR": "/run/user/0", "SELINUX_ROLE_REQUESTED": "", "QT_GRAPHICSSYSTEM_CHECKED": "1", "XDG_SESSION_ID": "52924", "_": "/usr/bin/python", "SELINUX_LEVEL_REQUESTED": "", "SHLVL": "2", "PWD": "/root", "MAIL": "/var/mail/root", "SSH_CONNECTION": "172.30.80.240 33344 172.30.81.91 22"}, "ansible_veth49d4b5bf": {"macaddress": "5a:f4:b1:3c:cb:66", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth49d4b5bf", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::58f4:b1ff:fe3c:cb66"}], "active": true, "speed": 10000}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:47:49", "network": "172.30.81.0", "mtu": 1500, "broadcast": "172.30.81.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.81.91", "interface": "ens192", "type": "ether", "gateway": "172.30.81.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_vethd5f7fec9": {"macaddress": "7e:f3:47:36:60:77", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": 
"on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd5f7fec9", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7cf3:47ff:fe36:6077"}], "active": true, "speed": 10000}, "ansible_vethc3d4398c": {"macaddress": "92:91:d1:72:60:b2", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc3d4398c", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9091:d1ff:fe72:60b2"}], "active": true, "speed": 10000}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": 
{"macaddress": "d6:5b:c1:07:05:3c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_distribution_file_search_string": "Red Hat", "ansible_vethf503a75f": {"macaddress": "ba:10:5f:22:96:5f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", 
"hw_timestamp_filters": [], "mtu": 1450, "device": "vethf503a75f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b810:5fff:fe22:965f"}], "active": true, "speed": 10000}, "ansible_product_uuid": "422A0783-7155-AC41-2292-5EE89A8FF6FB", "ansible_pkg_mgr": "yum", "ansible_distribution": "RedHat", "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:9ecff6bb752b", "ansible_all_ipv6_addresses": ["fe80::18a3:bcff:fea4:bd73", "fe80::d808:41ff:fee0:5abf", "fe80::fc84:4bff:feaf:b1c7", "fe80::fce5:ccff:fe8f:30bc", "fe80::7cf3:47ff:fe36:6077", "fe80::411:8cff:fedc:56c9", "fe80::e44f:a4ff:fe61:703e", "fe80::3cf9:90ff:fec8:b03", "fe80::88fe:6aff:feaa:3626", "fe80::88fc:cfff:fe38:cb4e", "fe80::9038:b4ff:febc:aa5f", "fe80::b415:d7ff:fe52:9650", "fe80::5834:5aff:fe00:10d1", "fe80::8451:69ff:fe12:d3b4", "fe80::b8e6:f4ff:fe2b:c39f", "fe80::9c77:57ff:fe94:c937", "fe80::643b:83ff:fefa:ab29", "fe80::1c6a:12ff:fe7e:1c29", "fe80::588c:72ff:fec1:8956", "fe80::300e:55ff:fe3c:f77c", "fe80::3448:1ff:fe5e:c3a6", "fe80::ac5d:88ff:fe8b:250d", "fe80::9091:d1ff:fe72:60b2", "fe80::e4c5:f1ff:fe74:af49", "fe80::881b:e6ff:fe5d:515a", "fe80::c465:ecff:fedc:cb94", "fe80::c07b:e8ff:fe31:16ad", "fe80::c805:93ff:fe50:9763", "fe80::582e:f7ff:fe63:4518", "fe80::54a3:22ff:fecd:af30", "fe80::b810:5fff:fe22:965f", "fe80::e43b:91ff:fee9:ced6", "fe80::2464:c8ff:feed:ad6c", "fe80::6404:42ff:fe8f:25a2", "fe80::ac9f:32ff:fef2:d79", "fe80::3449:aeff:febd:a427", "fe80::d01d:68ff:fe29:9a9a", "fe80::644e:e3ff:fece:8dc5", "fe80::eced:dcff:fef8:799f", "fe80::a879:55ff:fe03:c5aa", "fe80::383e:b2ff:febe:3ff1", "fe80::c7d:75ff:fef0:2cdb", "fe80::250:56ff:feaa:4749", "fe80::48b9:e7ff:fe5b:5a79", "fe80::d002:afff:fe77:88be", "fe80::60be:6aff:feb4:2fe8", "fe80::58bf:6fff:fe9a:636c", "fe80::64fb:26ff:fe1e:b569", "fe80::58f4:b1ff:fe3c:cb66", "fe80::fc8b:a5ff:fe18:9ae1", "fe80::a8e1:28ff:fe1b:d0d7", "fe80::b8be:e5ff:fee2:c21e", "fe80::648d:8eff:fe94:b413", "fe80::b0cc:2eff:fecf:3c9f"], "ansible_uptime_seconds": 10164933, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_vethb0d3d2cc": {"macaddress": "36:48:01:5e:c3:a6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": 
"off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb0d3d2cc", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3448:1ff:fe5e:c3a6"}], "active": true, "speed": 10000}, "ansible_vethc3eb2fa2": {"macaddress": "ca:05:93:50:97:63", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc3eb2fa2", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c805:93ff:fe50:9763"}], "active": true, "speed": 10000}, "ansible_vetha29713cf": {"macaddress": "b6:15:d7:52:96:50", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", 
"tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha29713cf", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b415:d7ff:fe52:9650"}], "active": true, "speed": 10000}, "ansible_veth96b8671b": {"macaddress": "c6:65:ec:dc:cb:94", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth96b8671b", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c465:ecff:fedc:cb94"}], "active": true, "speed": 10000}, "ansible_product_serial": "VMware-42 2a 07 83 71 55 ac 41-22 92 5e e8 9a 8f f6 fb", "ansible_form_factor": "Other", "ansible_veth8b673490": {"macaddress": "36:49:ae:bd:a4:27", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": 
"on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth8b673490", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3449:aeff:febd:a427"}], "active": true, "speed": 10000}, "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"schedulable": "false", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.81.91", "bootstrapped": false}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"portal_net": "172.18.128.0/17", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_veth5e58bb35": {"macaddress": "fe:e5:cc:8f:30:bc", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off 
[fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5e58bb35", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fce5:ccff:fe8f:30bc"}], "active": true, "speed": 10000}, "ansible_vxlan_sys_4789": {"macaddress": "e6:c5:f1:74:af:49", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e4c5:f1ff:fe74:af49"}], "active": true, "type": "ether"}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:43:4e:38:9d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", 
"rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.0242434e389d", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAW6w5Bd2NpYxHPg2FxCHkmkuhTw7ru4HpzU2cCd0XroKluGId1khSvK2nH7cGC5O+sGzHgb2LL1lAOLEZZ6egc=", "ansible_mounts": [{"block_used": 711102, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 10434990080, "block_total": 2547605, "mount": "/", "block_available": 1836503, "size_available": 7522316288, "fstype": "ext4", "inode_total": 654080, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 57326, "block_size": 4096, "inode_available": 596754}, {"block_used": 70599, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "size_total": 2046640128, "block_total": 499668, "mount": "/boot", "block_available": 429069, "size_available": 1757466624, "fstype": "ext4", "inode_total": 131072, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/sda1", "inode_used": 352, "block_size": 4096, "inode_available": 130720}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 32, "block_size": 4096, "inode_available": 255968}, {"block_used": 598142, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 6208094208, "block_total": 1515648, "mount": "/var", "block_available": 917506, "size_available": 3758104576, "fstype": "ext4", "inode_total": 389376, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 6743, "block_size": 4096, "inode_available": 382633}, {"block_used": 2962495, "uuid": 
"fdb049bd-4891-444b-8645-be3db36d3a8c", "size_total": 25295187968, "block_total": 6175583, "mount": "/var/log", "block_available": 3213088, "size_available": 13160808448, "fstype": "ext4", "inode_total": 1536000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var_log", "inode_used": 429, "block_size": 4096, "inode_available": 1535571}, {"block_used": 3849108, "uuid": "938fb6a8-595c-46db-ab53-6833cd6c0c54", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 4010349, "size_available": 16426389504, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 789557, "block_size": 4096, "inode_available": 14937035}, {"block_used": 3849108, "uuid": "938fb6a8-595c-46db-ab53-6833cd6c0c54", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 4010349, "size_available": 16426389504, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 789557, "block_size": 4096, "inode_available": 14937035}, {"block_used": 3849108, "uuid": "938fb6a8-595c-46db-ab53-6833cd6c0c54", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 4010349, "size_available": 16426389504, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 789557, "block_size": 4096, "inode_available": 14937035}], "ansible_system_vendor": "VMware, Inc.", "ansible_lvm": {"pvs": {"/dev/sdd": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sdb1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}, "/dev/sdc": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "28.00", "vg": "vg01"}}, "lvs": {"swap": {"size_g": "4.00", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "23.99", "vg": "vg01"}, "var": {"size_g": "6.00", "vg": "vg01"}, "home": {"size_g": "4.00", "vg": "vg01"}, "root": {"size_g": "10.00", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "47.99", "num_lvs": "5", "num_pvs": "3"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_veth322782be": {"macaddress": "ae:9f:32:f2:0d:79", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", 
"l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth322782be", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ac9f:32ff:fef2:d79"}], "active": true, "speed": 10000}, "ansible_virtualization_role": "guest", "ansible_swaptotal_mb": 0, "ansible_veth19366a6f": {"macaddress": "5a:2e:f7:63:45:18", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth19366a6f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::582e:f7ff:fe63:4518"}], "active": true, "speed": 10000}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off 
[fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_vethcb1957b3": {"macaddress": "1e:6a:12:7e:1c:29", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethcb1957b3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c6a:12ff:fe7e:1c29"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_vetheab6be8c": {"macaddress": "da:08:41:e0:5a:bf", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on 
[fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetheab6be8c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d808:41ff:fee0:5abf"}], "active": true, "speed": 10000}, "ansible_effective_group_id": 0, "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_veth2c813317": {"macaddress": "1a:a3:bc:a4:bd:73", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, 
"device": "veth2c813317", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::18a3:bcff:fea4:bd73"}], "active": true, "speed": 10000}, "ansible_veth1b8c6288": {"macaddress": "5a:8c:72:c1:89:56", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth1b8c6288", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::588c:72ff:fec1:8956"}], "active": true, "speed": 10000}, "ansible_br0": {"macaddress": "6a:00:90:76:36:49", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off 
[fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_vethfe4ab552": {"macaddress": "fe:8b:a5:18:9a:e1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethfe4ab552", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fc8b:a5ff:fe18:9ae1"}], "active": true, "speed": 10000}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDOi2FwcbhsrxiWVwVFeOv+ClhrR87147c/NSN0XKd8KSdXTyl91MU0eteP8aEezQRLOXwrYGkLEBMvtQxYy/7EHLpej3aYU9WLFK7S2ZcmOtzkX+ufR7QpiJDCKfVgS8L1UhVGXAf4jz4IHES7s+EeK51OHX4IZ+0kItUlZuro4+OFgWeQSPG0S+3DtWEY2dKfw1GGA1Wc1v6Vxgz15UR8J8eFHt0+Am51cQ6T7yYTjEYvqvhIRMeDoKx3OSjsWklQfa6SGG1ky3dWO8ZcBlg6uCpWSVOq6mDEWFk3ZFESJMBb0TifE3BDPrxJIcn4NF0SA0RExxZ5ks5ZSteY+hPr", "ansible_user_gecos": "root", "ansible_ens192": {"macaddress": "00:50:56:aa:47:49", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off 
[fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.81.255", "netmask": "255.255.255.0", "network": "172.30.81.0", "address": "172.30.81.91"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:4749"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_processor_threads_per_core": 1, "ansible_vethc54ebf0b": {"macaddress": "e6:4f:a4:61:70:3e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc54ebf0b", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e44f:a4ff:fe61:703e"}], "active": true, "speed": 10000}, "ansible_nodename": "sp-os-node08.os.ad.scanplus.de", "ansible_product_name": "VMware Virtual Platform", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.18.1", "172.30.81.91"], "ansible_python_version": "2.7.5", 
"ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 14380, "free": 1488}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 6035, "free": 9833}}, "ansible_user_dir": "/root", "ansible_vethf9ba955b": {"macaddress": "4a:b9:e7:5b:5a:79", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf9ba955b", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::48b9:e7ff:fe5b:5a79"}], "active": true, "speed": 10000}, "gather_subset": ["all"], "ansible_vethd1f5c121": {"macaddress": "ee:ed:dc:f8:79:9f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": 
"on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd1f5c121", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::eced:dcff:fef8:799f"}], "active": true, "speed": 10000}, "ansible_real_user_id": 0, "ansible_vethaad46f3e": {"macaddress": "66:fb:26:1e:b5:69", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethaad46f3e", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::64fb:26ff:fe1e:b569"}], "active": true, "speed": 10000}, "ansible_veth788b4a46": {"macaddress": "0e:7d:75:f0:2c:db", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", 
"tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth788b4a46", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c7d:75ff:fef0:2cdb"}], "active": true, "speed": 10000}, "ansible_dns": {"nameservers": ["172.30.81.91"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_vethc7f3a46f": {"macaddress": "3a:3e:b2:be:3f:f1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc7f3a46f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::383e:b2ff:febe:3ff1"}], "active": true, "speed": 10000}, "ansible_veth595446b6": {"macaddress": "62:be:6a:b4:2f:e8", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", 
"tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth595446b6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::60be:6aff:feb4:2fe8"}], "active": true, "speed": 10000}, "ansible_veth5b87f796": {"macaddress": "5a:bf:6f:9a:63:6c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5b87f796", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::58bf:6fff:fe9a:636c"}], "active": true, "speed": 10000}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdd": ["dm-4"], "sdb1": ["dm-5"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "sdc": ["dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdb1": ["lvm-pv-uuid-TPd76m-tt2F-dk2z-OLr1-0Q2j-dDRT-pP6ugK"], "sdd": ["lvm-pv-uuid-smRRt5-YS8H-qfal-ejxN-MBmU-cUwk-xqYg7l"], "sr0": 
["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "sdc": ["lvm-pv-uuid-oqR2bN-fhNW-vjps-Zwy2-3vCC-3bS9-jaXtvD"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-PJvDHu2D2PbJMXvRAIcjt3tGGBSdwEcXvUfpTIdpplXQCi4seZdf0SyH1iVlo6Fe"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"], "dm-4": ["fdb049bd-4891-444b-8645-be3db36d3a8c"], "dm-5": ["938fb6a8-595c-46db-ab53-6833cd6c0c54"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_vetha335c89b": {"macaddress": "9e:77:57:94:c9:37", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha335c89b", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9c77:57ff:fe94:c937"}], "active": true, "speed": 10000}, "ansible_veth11bd5ae1": {"macaddress": "b2:cc:2e:cf:3c:9f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": 
"off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth11bd5ae1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b0cc:2eff:fecf:3c9f"}], "active": true, "speed": 10000}, "ansible_vethd2f5428b": {"macaddress": "d2:02:af:77:88:be", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd2f5428b", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d002:afff:fe77:88be"}], "active": true, "speed": 10000}, "ansible_veth368a141f": {"macaddress": "06:11:8c:dc:56:c9", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth368a141f", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::411:8cff:fedc:56c9"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 1488, "ansible_veth1b83c69c": {"macaddress": "5a:34:5a:00:10:d1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": 
"veth1b83c69c", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5834:5aff:fe00:10d1"}], "active": true, "speed": 10000}, "ansible_veth5640f114": {"macaddress": "8a:1b:e6:5d:51:5a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5640f114", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::881b:e6ff:fe5d:515a"}], "active": true, "speed": 10000}, "ansible_veth86500f08": {"macaddress": "8a:fc:cf:38:cb:4e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off 
[fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth86500f08", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::88fc:cfff:fe38:cb4e"}], "active": true, "speed": 10000}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node08", "ansible_tun0": {"macaddress": "66:04:42:8f:25:a2", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.19.255", "netmask": "255.255.254.0", "network": "172.18.18.0", "address": "172.18.18.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6404:42ff:fe8f:25a2"}], "active": true, "type": "ether"}, "ansible_interfaces": ["vethd73b1a32", "veth73f9f5ba", "veth1b8c6288", "ovs-system", "tun0", "vethc54ebf0b", "veth19366a6f", "vetha335c89b", "veth96b8671b", "veth8b673490", "veth5b87f796", "vethd5f7fec9", "veth714ac697", "vethb0d3d2cc", "vethc7f3a46f", "veth322782be", "vethe18084c6", "br0", "lo", "vxlan_sys_4789", "veth4b9b2bbc", "vethfe4ab552", "veth5e58bb35", "veth86500f08", "vethd1f5c121", "veth463956d2", "veth49d4b5bf", "veth43a23823", "vethaad46f3e", "docker0", "veth595446b6", "vethd2f5428b", "vethcb7e1b6f", "vetheab6be8c", "vethcb1957b3", "veth4416882a", "veth11bd5ae1", "vethf9ba955b", "vetha90d8f60", "veth5640f114", "veth788b4a46", "veth1b83c69c", "veth2c813317", "vethf24f6b6d", "veth0bbd3a5a", "veth23686529", "vethf503a75f", "veth091f0722", "vethc3eb2fa2", "vetha29713cf", "vethe52c4202", "veth368a141f", "veth441b063a", "veth945abbd7", "vethc3d4398c", "veth231bc091", "ens192", "vethebde602b"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_fqdn": 
"sp-os-node08.os.ad.scanplus.de", "ansible_user_gid": 0, "ansible_veth43a23823": {"macaddress": "aa:79:55:03:c5:aa", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth43a23823", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a879:55ff:fe03:c5aa"}], "active": true, "speed": 10000}, "ansible_veth4416882a": {"macaddress": "86:51:69:12:d3:b4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", 
"receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4416882a", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8451:69ff:fe12:d3b4"}], "active": true, "speed": 10000}, "ansible_userspace_architecture": "x86_64", "ansible_veth945abbd7": {"macaddress": "32:0e:55:3c:f7:7c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth945abbd7", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::300e:55ff:fe3c:f77c"}], "active": true, "speed": 10000}, "ansible_veth73f9f5ba": {"macaddress": "26:64:c8:ed:ad:6c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", 
"tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth73f9f5ba", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2464:c8ff:feed:ad6c"}], "active": true, "speed": 10000}, "ansible_domain": "os.ad.scanplus.de", "ansible_vethd73b1a32": {"macaddress": "66:8d:8e:94:b4:13", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethd73b1a32", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::648d:8eff:fe94:b413"}], "active": true, "speed": 10000}, "ansible_vethebde602b": {"macaddress": "92:38:b4:bc:aa:5f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", 
"netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethebde602b", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9038:b4ff:febc:aa5f"}], "active": true, "speed": 10000}, "ansible_virtualization_type": "VMware", "ansible_veth4b9b2bbc": {"macaddress": "ae:5d:88:8b:25:0d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4b9b2bbc", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ac5d:88ff:fe8b:250d"}], "active": true, "speed": 10000}, "ansible_veth714ac697": {"macaddress": "ba:be:e5:e2:c2:1e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", 
"scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth714ac697", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::b8be:e5ff:fee2:c21e"}], "active": true, "speed": 10000}, "ansible_processor_cores": 1, "ansible_veth441b063a": {"macaddress": "66:3b:83:fa:ab:29", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth441b063a", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::643b:83ff:fefa:ab29"}], "active": true, "speed": 10000}, "ansible_vethe18084c6": {"macaddress": "66:4e:e3:ce:8d:c5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": 
"on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe18084c6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::644e:e3ff:fece:8dc5"}], "active": true, "speed": 10000}, "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153929", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044769", "iso8601_micro": "2019-01-09T14:39:29.655020Z", "weekday": "Wednesday", "time": "15:39:29", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:29Z", "day": "09", "iso8601_basic": "20190109T153929654871", "second": "29"}, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIPST6iLP0h6jJ/0YBLM3CFBq80R/xPc8dw1YpdlXa0Pe", "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth23686529": {"macaddress": "3e:f9:90:c8:0b:03", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", 
"tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth23686529", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::3cf9:90ff:fec8:b03"}], "active": true, "speed": 10000}, "ansible_veth091f0722": {"macaddress": "aa:e1:28:1b:d0:d7", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth091f0722", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a8e1:28ff:fe1b:d0d7"}], "active": true, "speed": 10000}, "ansible_vetha90d8f60": {"macaddress": "c2:7b:e8:31:16:ad", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off 
[fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha90d8f60", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c07b:e8ff:fe31:16ad"}], "active": true, "speed": 10000}, "ansible_system": "Linux", "ansible_devices": {"sdd": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-smRRt5-YS8H-qfal-ejxN-MBmU-cUwk-xqYg7l"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "71303168", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-v5zax9-aivN-kQeO-Gck2-FPFL-D9Je-C5bKcR"], "uuids": []}, "sectors": "58718208", "start": "4196352", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "28.00 GB"}, "sda1": {"sectorsize": 512, "uuid": "f6cc3b12-c504-4f5c-9a86-18c839370fa4", "links": {"masters": [], "labels": [], "ids": [], "uuids": ["f6cc3b12-c504-4f5c-9a86-18c839370fa4"]}, "sectors": "4194304", "start": "2048", "holders": [], "size": "2.00 GB"}}, "holders": [], "size": "34.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, 
"links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-TPd76m-tt2F-dk2z-OLr1-0Q2j-dDRT-pP6ugK"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-oqR2bN-fhNW-vjps-Zwy2-3vCC-3bS9-jaXtvD"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "50307072", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["fdb049bd-4891-444b-8645-be3db36d3a8c"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "23.99 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-PJvDHu2D2PbJMXvRAIcjt3tGGBSdwEcXvUfpTIdpplXQCi4seZdf0SyH1iVlo6Fe"], "uuids": ["938fb6a8-595c-46db-ab53-6833cd6c0c54"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8380416", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "12582912", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "6.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8396800", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", 
"dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}}, "ansible_vethf24f6b6d": {"macaddress": "56:a3:22:cd:af:30", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethf24f6b6d", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::54a3:22ff:fecd:af30"}], "active": true, "speed": 10000}, "ansible_user_uid": 0, "ansible_veth231bc091": {"macaddress": "8a:fe:6a:aa:36:26", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": 
"on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth231bc091", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::88fe:6aff:feaa:3626"}], "active": true, "speed": 10000}, "ansible_bios_date": "09/21/2015", "ansible_veth463956d2": {"macaddress": "e6:3b:91:e9:ce:d6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth463956d2", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::e43b:91ff:fee9:ced6"}], "active": true, "speed": 10000}, "ansible_vethcb7e1b6f": {"macaddress": "d2:1d:68:29:9a:9a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off 
[fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethcb7e1b6f", "promisc": false, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d01d:68ff:fe29:9a9a"}], "active": true, "speed": 10000}, "ansible_bios_version": "6.00", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"]}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') ok: [sp-os-node12.os.ad.scanplus.de] ok: [sp-os-node09.os.ad.scanplus.de] ok: [sp-os-node06.os.ad.scanplus.de] ok: [sp-os-node07.os.ad.scanplus.de] ok: [sp-os-node08.os.ad.scanplus.de] (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"module_setup": true, "ansible_vethc3d784dc": {"macaddress": "62:50:96:5b:23:1c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", 
"generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc3d784dc", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6050:96ff:fe5b:231c"}], "active": true, "speed": 10000}, "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDTHdktg03vqU3PnTVHaA9WMHupC+FceaMUDt6bTrWQq0753tEsRMypN9KzCd3vS/yoVSzX3vCCYvNQSw5fjOv0=", "ansible_veth560525c5": {"macaddress": "fa:22:c6:58:31:cd", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth560525c5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f822:c6ff:fe58:31cd"}], "active": true, "speed": 10000}, "ansible_distribution_version": "7.5", "ansible_veth701b2d1c": {"macaddress": "d2:02:87:b0:5e:eb", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": 
"on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth701b2d1c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::d002:87ff:feb0:5eeb"}], "active": true, "speed": 10000}, "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_CLIENT": "172.30.80.240 52322 22", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "PWD": "/root", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "HOME": "/root", "SELINUX_LEVEL_REQUESTED": "", "XDG_SESSION_ID": "68724", "_": "/usr/bin/python", "SSH_CONNECTION": "172.30.80.240 52322 172.30.80.234 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "00:50:56:aa:20:66", "network": "172.30.80.0", "mtu": 1500, "broadcast": "172.30.80.255", "alias": "ens192", "netmask": "255.255.255.0", "address": "172.30.80.234", "interface": "ens192", "type": "ether", "gateway": "172.30.80.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-862.11.6.el7.x86_64", "quiet": true, "vconsole.font": "latarcyrheb-sun16", "rhgb": true, "rd.lvm.lv": "vg01/root", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/vg01-root", "vconsole.keymap": "de"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_ovs_system": {"macaddress": "1a:b9:cd:22:96:26", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", 
"tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1500, "device": "ovs-system", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "422AB8DD-0A8C-071E-890E-611107AEADCF", "ansible_pkg_mgr": "yum", "ansible_veth102eba8c": {"macaddress": "86:f3:fe:f2:5e:4b", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth102eba8c", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::84f3:feff:fef2:5e4b"}], "active": true, "speed": 10000}, "ansible_veth4e996d72": {"macaddress": "f6:03:4e:35:38:23", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", 
"tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth4e996d72", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f403:4eff:fe35:3823"}], "active": true, "speed": 10000}, "ansible_distribution": "RedHat", "ansible_vethcecbfb92": {"macaddress": "ca:c5:ff:4e:67:c1", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethcecbfb92", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::c8c5:ffff:fe4e:67c1"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7bdc23f3da9", "ansible_veth6472e7a0": {"macaddress": "2a:84:2a:15:e2:26", 
"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6472e7a0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2884:2aff:fe15:e226"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::4499:3cff:feb3:3750", "fe80::6c5e:1eff:fe48:ed6", "fe80::5843:e5ff:fee0:d302", "fe80::9872:4ff:feee:d007", "fe80::a030:d8ff:fe0c:be81", "fe80::84f3:feff:fef2:5e4b", "fe80::f4bd:2bff:fe1f:9d4d", "fe80::f822:c6ff:fe58:31cd", "fe80::6050:96ff:fe5b:231c", "fe80::2078:44ff:feec:1632", "fe80::182b:97ff:fef0:2ffc", "fe80::d002:87ff:feb0:5eeb", "fe80::f403:4eff:fe35:3823", "fe80::2884:2aff:fe15:e226", "fe80::7069:b2ff:fec0:d931", "fe80::6c82:deff:feac:4e08", "fe80::dce0:42ff:feb9:f2ad", "fe80::f869:9ff:febe:2815", "fe80::1c76:86ff:fe76:3f0", "fe80::49a:3aff:fe0e:c764", "fe80::c8c5:ffff:fe4e:67c1", "fe80::705e:aaff:fe22:c84a", "fe80::ac07:74ff:fea4:c71e", "fe80::1cf9:a8ff:fee5:2d63", "fe80::5c9f:43ff:fe03:6ad5", "fe80::f431:baff:fe4d:72be", "fe80::250:56ff:feaa:2066", "fe80::7ce2:a6ff:fee5:69f", "fe80::1cf4:6dff:fe39:cb6c"], "ansible_uptime_seconds": 10164958, "ansible_kernel": "3.10.0-862.11.6.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_user_shell": "/bin/bash", "ansible_product_serial": "VMware-42 2a b8 dd 0a 8c 07 1e-89 0e 61 11 07 ae ad cf", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {"openshift": {"node": {"labels": {"region": "primary", "zone": "RZ-LM07"}, "proxy_mode": "iptables", "dns_ip": "172.30.80.234", "bootstrapped": false}, 
"docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "master": {}, "common": {"portal_net": "172.18.128.0/17", "etcd_runtime": "host", "is_etcd_system_container": false, "deployment_subtype": "basic", "is_master_system_container": false, "is_containerized": false, "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise"}, "cloudprovider": {}}}, "ansible_vxlan_sys_4789": {"macaddress": "22:78:44:ec:16:32", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "off [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65000, "device": "vxlan_sys_4789", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::2078:44ff:feec:1632"}], "active": true, "type": "ether"}, "ansible_veth8e99f1e3": {"macaddress": "9a:72:04:ee:d0:07", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": 
"off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth8e99f1e3", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::9872:4ff:feee:d007"}], "active": true, "speed": 10000}, "ansible_veth9666f720": {"macaddress": "de:e0:42:b9:f2:ad", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9666f720", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::dce0:42ff:feb9:f2ad"}], "active": true, "speed": 10000}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:18:4a:74:71", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": 
"off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "on", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": [], "id": "8000.0242184a7471", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "active": false, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "2", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "3", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "4", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "5", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "6", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz", "7", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz"], "ansible_veth44ed6c97": {"macaddress": "46:99:3c:b3:37:50", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": 
[], "mtu": 1450, "device": "veth44ed6c97", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4499:3cff:feb3:3750"}], "active": true, "speed": 10000}, "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_swaptotal_mb": 0, "ansible_veth90b8a5ca": {"macaddress": "f6:bd:2b:1f:9d:4d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth90b8a5ca", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f4bd:2bff:fe1f:9d4d"}], "active": true, "speed": 10000}, "ansible_veth6c2a9b38": {"macaddress": "1e:76:86:76:03:f0", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", 
"tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6c2a9b38", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c76:86ff:fe76:3f0"}], "active": true, "speed": 10000}, "ansible_distribution_major_version": "7", "ansible_veth416b7428": {"macaddress": "a2:30:d8:0c:be:81", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth416b7428", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a030:d8ff:fe0c:be81"}], "active": true, "speed": 10000}, "ansible_real_group_id": 0, "ansible_lsb": {"release": "7.5", "major_release": "7", "codename": "Maipo", "id": "RedHatEnterpriseServer", "description": "Red Hat Enterprise Linux Server release 7.5 (Maipo)"}, "ansible_veth2ccd4011": {"macaddress": "7e:e2:a6:e5:06:9f", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off 
[fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth2ccd4011", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7ce2:a6ff:fee5:69f"}], "active": true, "speed": 10000}, "ansible_tun0": {"macaddress": "fa:69:09:be:28:15", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "tun0", "promisc": true, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.18.11.255", "netmask": "255.255.254.0", "network": "172.18.10.0", "address": "172.18.10.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f869:9ff:febe:2815"}], "active": true, "type": "ether"}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQCyc56A6kosdPpNHOIseHjhEFe9Ef8+3MQZ7hQ/utdAgyZjJe9tP47Pa/5dVkdmgSfkXksueORcVFAIFweurc3MvSoeo2ZACJQcfyqk5hD87AG3ApctMnUqLKKGkZJWyN130EDDIOBkS53Rde9csoDVTjzSg8xj6mYegQUmc97ks/97WoZajSEJhk1DhjEhrfYQel6JvqhEWG5xQkSs1bPmiPbsqt08c9/nJk5lzGa42+EvTNLsLbcpQn5ygWb5A8Fzu8mDOwEyb1EMrcyaNGQylIGNV5tPdfts0TPfw0OqJ67xVYZOCcoJVOgtB4bsJ/ayGaWXF6OxNr91GyOkHAuj", "ansible_user_gecos": "root", "ansible_veth17a16fb1": {"macaddress": "72:69:b2:c0:d9:31", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth17a16fb1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::7069:b2ff:fec0:d931"}], "active": true, "speed": 10000}, "ansible_processor_threads_per_core": 1, "ansible_veth6a79c3f6": {"macaddress": "5a:43:e5:e0:d3:02", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", 
"tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth6a79c3f6", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5843:e5ff:fee0:d302"}], "active": true, "speed": 10000}, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["172.17.0.1", "172.18.10.1", "172.30.80.234"], "ansible_python_version": "2.7.5", "ansible_vethc57535cb": {"macaddress": "f6:31:ba:4d:72:be", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc57535cb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::f431:baff:fe4d:72be"}], "active": true, "speed": 10000}, "ansible_product_version": "None", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 15868, "used": 15257, "free": 611}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 4530, "free": 11338}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_veth7b351fc7": {"macaddress": "1e:f9:a8:e5:2d:63", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", 
"tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth7b351fc7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1cf9:a8ff:fee5:2d63"}], "active": true, "speed": 10000}, "ansible_real_user_id": 0, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["172.30.80.234"], "search": ["cluster.local", "os.ad.scanplus.de"]}, "ansible_effective_group_id": 0, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", 
"network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 15868, "ansible_device_links": {"masters": {"sdd": ["dm-4"], "sdc1": ["dm-5"], "sdb1": ["dm-3"], "sda2": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"]}, "labels": {"dm-2": ["lv_home"], "dm-3": ["lv_var"], "dm-1": ["lv_root"]}, "ids": {"sdc1": ["lvm-pv-uuid-gZXZhY-3Nrg-gJnd-4Tz4-KxOz-hecS-kvLW0m"], "sdb1": ["lvm-pv-uuid-umBtSO-qrta-hqA7-ccHx-bMVN-uN9y-eZ8iPd"], "sdd": ["lvm-pv-uuid-l37lLM-Q5wc-yX1c-HPoe-GwoX-HwRJ-GVruyj"], "sr0": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "sda2": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "dm-4": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "dm-5": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-U427cH1jef8CiuEaC1f0fD7ArcEle1MUtXCRSvnkMp2cskPctGQ1aSjJG3g4fzrv"], "dm-2": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "dm-3": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "dm-0": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "dm-1": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"]}, "uuids": {"sda1": ["3be19171-9fdb-4539-9387-6bdd0564873a"], "dm-4": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"], "dm-5": ["1fd62121-29a1-4453-805f-8687afd0faca"], "dm-2": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"], "dm-3": ["00fdfc93-06a0-4049-a073-c5f715f53604"], "dm-0": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"], "dm-1": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}}, "ansible_veth9cc5030c": {"macaddress": "ae:07:74:a4:c7:1e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth9cc5030c", "promisc": true, "timestamping": ["rx_software", 
"software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::ac07:74ff:fea4:c71e"}], "active": true, "speed": 10000}, "ansible_apparmor": {"status": "disabled"}, "ansible_vethe1880fcb": {"macaddress": "06:9a:3a:0e:c7:64", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethe1880fcb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::49a:3aff:fe0e:c764"}], "active": true, "speed": 10000}, "ansible_veth5eba2c1e": {"macaddress": "1e:f4:6d:39:cb:6c", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", 
"tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth5eba2c1e", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1cf4:6dff:fe39:cb6c"}], "active": true, "speed": 10000}, "ansible_memfree_mb": 611, "ansible_veth0fb1b0b5": {"macaddress": "5e:9f:43:03:6a:d5", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth0fb1b0b5", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::5c9f:43ff:fe03:6ad5"}], "active": true, "speed": 10000}, "ansible_processor_count": 8, "ansible_hostname": "sp-os-node04", "ansible_interfaces": ["veth7b351fc7", "vethc3d784dc", "veth17a16fb1", "tun0", "veth416b7428", "veth6c2a9b38", "veth44ed6c97", "veth0fb1b0b5", "lo", "vxlan_sys_4789", "veth4e996d72", "veth2ccd4011", "veth5eba2c1e", "veth102eba8c", "veth90b8a5ca", "veth701b2d1c", "veth6a79c3f6", "ovs-system", "vethb07f5c08", "veth6472e7a0", "docker0", "veth9666f720", "br0", "vethc57535cb", "veth8e99f1e3", "vethc3be2ed2", "vetha1936e28", "vethe1880fcb", "vethcecbfb92", "veth644456bb", "veth9cc5030c", "veth560525c5", "ens192"], "ansible_machine_id": "d768f1f16c8043df9d09ccf8ab47a75c", "ansible_veth644456bb": {"macaddress": "72:5e:aa:22:c8:4a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", 
"tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "veth644456bb", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::705e:aaff:fe22:c84a"}], "active": true, "speed": 10000}, "ansible_vethc3be2ed2": {"macaddress": "1a:2b:97:f0:2f:fc", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethc3be2ed2", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::182b:97ff:fef0:2ffc"}], "active": true, "speed": 10000}, "ansible_fqdn": "sp-os-node04.os.ad.scanplus.de", "ansible_mounts": [{"block_used": 912564, "uuid": "e36965f9-43c4-4739-9fd6-48d5e91ae531", "size_total": 29478518784, "block_total": 7196904, "mount": "/", "block_available": 6284340, "size_available": 
25740656640, "fstype": "ext4", "inode_total": 1839600, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-root", "inode_used": 43760, "block_size": 4096, "inode_available": 1795840}, {"block_used": 50825, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "size_total": 520794112, "block_total": 127147, "mount": "/boot", "block_available": 76322, "size_available": 312614912, "fstype": "xfs", "inode_total": 512000, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 345, "block_size": 4096, "inode_available": 511655}, {"block_used": 462298, "uuid": "00fdfc93-06a0-4049-a073-c5f715f53604", "size_total": 20753092608, "block_total": 5066673, "mount": "/var", "block_available": 4604375, "size_available": 18859520000, "fstype": "ext4", "inode_total": 1289808, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-var", "inode_used": 4618, "block_size": 4096, "inode_available": 1285190}, {"block_used": 2186052, "uuid": "c0e5f752-3494-4b1a-97d4-85ac937e51de", "size_total": 14917042176, "block_total": 3641856, "mount": "/var/log", "block_available": 1455804, "size_available": 5962973184, "fstype": "xfs", "inode_total": 14577664, "options": "rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/mapper/vg01-var_log", "inode_used": 278, "block_size": 4096, "inode_available": 14577386}, {"block_used": 59318, "uuid": "448b53b9-3193-40d6-a9e4-8eea58184ff3", "size_total": 4061331456, "block_total": 991536, "mount": "/home", "block_available": 932218, "size_available": 3818364928, "fstype": "ext4", "inode_total": 256000, "options": "rw,seclabel,relatime,data=ordered", "device": "/dev/mapper/vg01-home", "inode_used": 33, "block_size": 4096, "inode_available": 255967}, {"block_used": 4912935, "uuid": "1fd62121-29a1-4453-805f-8687afd0faca", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker", "block_available": 2946522, "size_available": 12068954112, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 513274, "block_size": 4096, "inode_available": 15213318}, {"block_used": 4912935, "uuid": "1fd62121-29a1-4453-805f-8687afd0faca", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/containers", "block_available": 2946522, "size_available": 12068954112, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 513274, "block_size": 4096, "inode_available": 15213318}, {"block_used": 4912935, "uuid": "1fd62121-29a1-4453-805f-8687afd0faca", "size_total": 32192335872, "block_total": 7859457, "mount": "/var/lib/docker/overlay2", "block_available": 2946522, "size_available": 12068954112, "fstype": "xfs", "inode_total": 15726592, "options": "rw,seclabel,relatime,attr2,inode64,prjquota,bind", "device": "/dev/mapper/vg0--docker-dockerlv", "inode_used": 513274, "block_size": 4096, "inode_available": 15213318}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/d2a588be-febb-11e8-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-dev-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": 
"rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.234,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/dev/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/d2a588be-febb-11e8-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-dev-media", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.234,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/dev/media", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/9a8a4dd1-03a8-11e9-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-prod-static", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.234,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/prod/static", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/9a8a4dd1-03a8-11e9-b7d6-005056aa3492/volumes/kubernetes.io~nfs/pv-scanplus-netbox-prod-media", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.234,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/netbox/prod/media", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}, {"block_used": 566742, "uuid": "N/A", "size_total": 538869497856, "block_total": 2055624, "mount": "/var/lib/origin/openshift.local.volumes/pods/ac370ed5-13eb-11e9-93ab-005056aa3492/volumes/kubernetes.io~nfs/gerritevents-mongodb", "block_available": 1488882, "size_available": 390301483008, "fstype": "nfs4", "inode_total": 33423360, "options": "rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.30.80.234,local_lock=none,addr=172.30.80.251", "device": "172.30.80.251:/exports/gerritevents/mongodb", "inode_used": 144610, "block_size": 262144, "inode_available": 33278750}], "ansible_nodename": "sp-os-node04.os.ad.scanplus.de", "ansible_distribution_file_search_string": "Red Hat", "ansible_lvm": {"pvs": {"/dev/sdd": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sdb1": {"free_g": "0", "size_g": "10.00", "vg": "vg01"}, "/dev/sda2": {"free_g": "0", "size_g": "49.50", "vg": "vg01"}, "/dev/sdc1": {"free_g": "0", "size_g": "30.00", "vg": "vg0-docker"}}, "lvs": {"swap": {"size_g": "3.91", "vg": "vg01"}, "dockerlv": {"size_g": "30.00", "vg": "vg0-docker"}, "var_log": {"size_g": "13.90", "vg": "vg01"}, "var": {"size_g": "19.76", "vg": "vg01"}, "home": {"size_g": "3.91", "vg": "vg01"}, "root": {"size_g": 
"28.02", "vg": "vg01"}}, "vgs": {"vg01": {"free_g": "0", "size_g": "69.50", "num_lvs": "5", "num_pvs": "3"}, "vg0-docker": {"free_g": "0", "size_g": "30.00", "num_lvs": "1", "num_pvs": "1"}}}, "ansible_ens192": {"macaddress": "00:50:56:aa:20:66", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "on [fixed]", "tx_vlan_offload": "on", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:0b:00.0", "module": "vmxnet3", "mtu": 1500, "device": "ens192", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "172.30.80.255", "netmask": "255.255.255.0", "network": "172.30.80.0", "address": "172.30.80.234"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::250:56ff:feaa:2066"}], "active": true, "speed": 10000, "hw_timestamp_filters": []}, "ansible_domain": "os.ad.scanplus.de", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIB25sZrAnsIBzTQdJsgx26Sm7OIODAKHOYZ3n60TyWwY", "ansible_processor_cores": 1, "ansible_bios_version": "6.00", "ansible_date_time": {"weekday_number": "3", "iso8601_basic_short": "20190109T153958", "tz": "CET", "weeknumber": "01", "hour": "15", "year": "2019", "minute": "39", "tz_offset": "+0100", "month": "01", "epoch": "1547044798", "iso8601_micro": "2019-01-09T14:39:58.612066Z", "weekday": "Wednesday", "time": "15:39:58", "date": "2019-01-09", "iso8601": "2019-01-09T14:39:58Z", "day": "09", "iso8601_basic": "20190109T153958611951", "second": "58"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_vetha1936e28": {"macaddress": "6e:5e:1e:48:0e:d6", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": 
"on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vetha1936e28", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6c5e:1eff:fe48:ed6"}], "active": true, "speed": 10000}, "ansible_product_name": "VMware Virtual Platform", "ansible_devices": {"sdd": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": ["dm-4"], "labels": [], "ids": ["lvm-pv-uuid-l37lLM-Q5wc-yX1c-HPoe-GwoX-HwRJ-GVruyj"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {}, "holders": ["vg01-var_log"], "size": "10.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "NECVMWar", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-VMware_Virtual_SATA_CDRW_Drive_00000000000000000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: VMware SATA AHCI controller", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "VMware SATA CD00", "partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2", "dm-3", "dm-4"], "labels": [], "ids": ["lvm-pv-uuid-LBc99i-H4dV-cdlN-qAYO-2VHC-ycxB-Fp0y1V"], "uuids": []}, "sectors": "103823360", "start": "1026048", "holders": ["vg01-swap", "vg01-root", "vg01-home", "vg01-var", "vg01-var_log"], "size": "49.51 GB"}, "sda1": {"sectorsize": 512, "uuid": "3be19171-9fdb-4539-9387-6bdd0564873a", "links": {"masters": 
[], "labels": [], "ids": [], "uuids": ["3be19171-9fdb-4539-9387-6bdd0564873a"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "holders": [], "size": "50.00 GB"}, "sdb": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdb1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-3"], "labels": [], "ids": ["lvm-pv-uuid-umBtSO-qrta-hqA7-ccHx-bMVN-uN9y-eZ8iPd"], "uuids": []}, "sectors": "20971457", "start": "63", "holders": ["vg01-var"], "size": "10.00 GB"}}, "holders": [], "size": "10.00 GB"}, "sdc": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "VMware", "sectors": "62914560", "links": {"masters": [], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS (rev 01)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Virtual disk", "partitions": {"sdc1": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-5"], "labels": [], "ids": ["lvm-pv-uuid-gZXZhY-3Nrg-gJnd-4Tz4-KxOz-hecS-kvLW0m"], "uuids": []}, "sectors": "62912512", "start": "2048", "holders": ["vg0--docker-dockerlv"], "size": "30.00 GB"}}, "holders": [], "size": "30.00 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "29155328", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-var_log", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo675YwZDwpYFkSEtEuJ9udosVsVXAYNMjCP"], "uuids": ["c0e5f752-3494-4b1a-97d4-85ac937e51de"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "13.90 GB"}, "dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "62906368", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg0--docker-dockerlv", "dm-uuid-LVM-U427cH1jef8CiuEaC1f0fD7ArcEle1MUtXCRSvnkMp2cskPctGQ1aSjJG3g4fzrv"], "uuids": ["1fd62121-29a1-4453-805f-8687afd0faca"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": ["lv_home"], "ids": ["dm-name-vg01-home", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo670mhY3X4XKvzOYwp5C1wbcmVee9g2cqJH"], "uuids": ["448b53b9-3193-40d6-a9e4-8eea58184ff3"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "41443328", "links": {"masters": [], "labels": ["lv_var"], "ids": ["dm-name-vg01-var", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67Ci0RkznNVSfygAuT8OblWY1PksvJpIk7"], "uuids": ["00fdfc93-06a0-4049-a073-c5f715f53604"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", 
"removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "19.76 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "8192000", "links": {"masters": [], "labels": [], "ids": ["dm-name-vg01-swap", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo6769rDx5sXsUhCpGplEGozfb9q8xYpHQNi"], "uuids": ["5ad5278d-edb2-4bd9-b665-5ce8d4ea672a"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "3.91 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "58761216", "links": {"masters": [], "labels": ["lv_root"], "ids": ["dm-name-vg01-root", "dm-uuid-LVM-HYOk2XTwnd0ls0trqNJIeWmMknTzZo67rTYa7UY51rOC54OEZY4e9Phmqdvi0FB0"], "uuids": ["e36965f9-43c4-4739-9fd6-48d5e91ae531"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "28.02 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "09/21/2015", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_vethb07f5c08": {"macaddress": "6e:82:de:ac:4e:08", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1450, "device": "vethb07f5c08", "promisc": 
true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::6c82:deff:feac:4e08"}], "active": true, "speed": 10000}, "ansible_br0": {"macaddress": "1e:c7:c3:d7:37:4d", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "hw_timestamp_filters": [], "mtu": 1450, "device": "br0", "promisc": true, "timestamping": ["rx_software", "software"], "active": false, "type": "ether"}}}\n', '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nUSE OF THIS COMPUTER SYSTEM, AUTHORIZED OR UNAUTHORIZED, CONSTITUTES CONSENT TO MONITORING OF THIS SYSTEM.\nUNAUTHORIZED USE MAY SUBJECT YOU TO CRIMINAL PROSECUTION.\nEVIDENCE OF UNAUTHORIZED USE COLLECTED DURING MONITORING MAY BE USED FOR ADMINISTRATIVE, CRIMINAL, OR OTHER ADVERSE ACTION.\nUSE OF THIS SYSTEM CONSTITUTES CONSENT TO MONITORING FOR THESE PURPOSES.\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n') ok: [sp-os-node04.os.ad.scanplus.de] META: ran handlers TASK [openshift_sanitize_inventory : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:4 Wednesday 09 January 2019 15:40:00 +0100 (0:00:32.062) 0:00:34.385 ***** statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for sp-os-master01.os.ad.scanplus.de, sp-os-infra01.os.ad.scanplus.de, sp-os-infra02.os.ad.scanplus.de, sp-os-node02.os.ad.scanplus.de, sp-os-node03.os.ad.scanplus.de, 
sp-os-node04.os.ad.scanplus.de, sp-os-node05.os.ad.scanplus.de, sp-os-node06.os.ad.scanplus.de, sp-os-node07.os.ad.scanplus.de, sp-os-node08.os.ad.scanplus.de, sp-os-node09.os.ad.scanplus.de, sp-os-node10.os.ad.scanplus.de, sp-os-node11.os.ad.scanplus.de, sp-os-node12.os.ad.scanplus.de TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:4 Wednesday 09 January 2019 15:40:03 +0100 (0:00:03.811) 0:00:38.197 ***** ok: [sp-os-infra01.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-master01.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-infra02.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node02.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node03.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node04.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node05.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node06.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node07.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node09.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node08.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node10.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node11.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [sp-os-node12.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [openshift_sanitize_inventory : debug] ********************************************************************************************************************************************************************************************************************************************************************************* task path: 
/usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:13 Wednesday 09 January 2019 15:40:07 +0100 (0:00:03.241) 0:00:41.438 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} skipping: [sp-os-infra01.os.ad.scanplus.de] => {} skipping: [sp-os-infra02.os.ad.scanplus.de] => {} skipping: [sp-os-node02.os.ad.scanplus.de] => {} skipping: [sp-os-node03.os.ad.scanplus.de] => {} skipping: [sp-os-node04.os.ad.scanplus.de] => {} skipping: [sp-os-node05.os.ad.scanplus.de] => {} skipping: [sp-os-node06.os.ad.scanplus.de] => {} skipping: [sp-os-node07.os.ad.scanplus.de] => {} skipping: [sp-os-node08.os.ad.scanplus.de] => {} skipping: [sp-os-node09.os.ad.scanplus.de] => {} skipping: [sp-os-node10.os.ad.scanplus.de] => {} skipping: [sp-os-node11.os.ad.scanplus.de] => {} skipping: [sp-os-node12.os.ad.scanplus.de] => {} TASK [openshift_sanitize_inventory : set_stats] ***************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:14 Wednesday 09 January 2019 15:40:09 +0100 (0:00:01.955) 0:00:43.394 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml:10 Wednesday 09 January 2019 15:40:11 +0100 (0:00:02.220) 0:00:45.615 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { 
"openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { 
"openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "openshift_logging_elasticsearch_ops_pvc_dynamic": "", "openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } TASK [openshift_sanitize_inventory : Standardize on latest variable names] ************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:7 Wednesday 09 January 2019 15:40:13 +0100 (0:00:02.262) 0:00:47.877 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": 
"basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } TASK [openshift_sanitize_inventory : Normalize openshift_release] *********************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:12 Wednesday 09 January 2019 15:40:15 +0100 (0:00:02.224) 0:00:50.101 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:22 Wednesday 09 January 2019 15:40:18 +0100 
TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:22
Wednesday 09 January 2019 15:40:18 +0100 (0:00:02.145) 0:00:52.246 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : include_tasks] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:31
Wednesday 09 January 2019 15:40:20 +0100 (0:00:02.131) 0:00:54.378 *****
included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for sp-os-master01.os.ad.scanplus.de, sp-os-infra01.os.ad.scanplus.de, sp-os-infra02.os.ad.scanplus.de, sp-os-node02.os.ad.scanplus.de, sp-os-node03.os.ad.scanplus.de, sp-os-node04.os.ad.scanplus.de, sp-os-node05.os.ad.scanplus.de, sp-os-node06.os.ad.scanplus.de, sp-os-node07.os.ad.scanplus.de, sp-os-node08.os.ad.scanplus.de, sp-os-node09.os.ad.scanplus.de, sp-os-node10.os.ad.scanplus.de, sp-os-node11.os.ad.scanplus.de, sp-os-node12.os.ad.scanplus.de

TASK [openshift_sanitize_inventory : set_fact] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:5
Wednesday 09 January 2019 15:40:24 +0100 (0:00:04.080) 0:00:58.458 *****
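The checks in this role all follow the same fail-fast shape: a fail (or assert) task guarded by a when expression, so a bad inventory aborts the upgrade before anything on the hosts is changed, and include_tasks pulls in additional check files such as unsupported.yml for every host. A generic sketch of the guard pattern (the regex and message are illustrative):

  - name: Abort when openshift_release is invalid (sketch)
    fail:
      msg: openshift_release must look like 3.11 or 3.11.z
    when:
      - openshift_release is defined
      - not openshift_release is match('^\d+(\.\d+){1,2}$')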
TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:12
Wednesday 09 January 2019 15:40:26 +0100 (0:00:02.375) 0:01:00.834 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:28
Wednesday 09 January 2019 15:40:28 +0100 (0:00:02.219) 0:01:03.053 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:41
Wednesday 09 January 2019 15:40:30 +0100 (0:00:02.140) 0:01:05.193 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Check for deprecated prometheus/grafana install] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:53
Wednesday 09 January 2019 15:40:33 +0100 (0:00:02.071) 0:01:07.264 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:35
Wednesday 09 January 2019 15:40:35 +0100 (0:00:02.122) 0:01:09.387 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:48
Wednesday 09 January 2019 15:40:37 +0100 (0:00:02.138) 0:01:11.525 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:57
Wednesday 09 January 2019 15:40:39 +0100 (0:00:02.163) 0:01:13.689 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:66
Wednesday 09 January 2019 15:40:41 +0100 (0:00:02.112) 0:01:15.802 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [openshift_sanitize_inventory : ensure provider configuration variables are defined] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:83
Wednesday 09 January 2019 15:40:43 +0100 (0:00:02.048) 0:01:17.850 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Ensure removed web console extension variables are not set] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:98
Wednesday 09 January 2019 15:40:45 +0100 (0:00:02.136) 0:01:19.987 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : Ensure that web console port matches API server port] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:109
Wednesday 09 January 2019 15:40:47 +0100 (0:00:02.149) 0:01:22.137 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [openshift_sanitize_inventory : At least one master is schedulable] ******************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:119
Wednesday 09 January 2019 15:40:50 +0100 (0:00:02.156) 0:01:24.293 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [Detecting Operating System from ostree_booted] ******************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:19
Wednesday 09 January 2019 15:40:52 +0100 (0:00:02.225) 0:01:26.518 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node03.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node05.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
ok: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node06.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node07.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
ok: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node08.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
ok: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node09.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node10.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
ok: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node11.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
ok: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
ok: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node12.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }

TASK [set openshift_deployment_type if unset] ******************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:28
Wednesday 09 January 2019 15:40:54 +0100 (0:00:02.437) 0:01:28.956 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
false }, "changed": false } ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false } TASK [Determine Atomic Host Docker Version] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:51 Wednesday 09 January 2019 15:40:59 +0100 (0:00:02.219) 0:01:33.242 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was 
False" } TASK [assert atomic host docker version is 1.12 or later] ******************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:55 Wednesday 09 January 2019 15:41:01 +0100 (0:00:02.041) 0:01:35.283 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Retrieve existing master configs and validate] ************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [openshift_control_plane : stat] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:3 Wednesday 09 January 2019 15:41:03 +0100 (0:00:02.121) 0:01:37.404 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": 
false, "follow": false, "path": "/etc/origin/master/master-config.yaml", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1547019889.8290536, "block_size": 4096, "inode": 395378, "isgid": false, "size": 6719, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 16, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/origin/master/master-config.yaml", "xusr": false, "atime": 1547019890.519067, "isdir": false, "ctime": 1547019889.8290536, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/origin/master/master-config.yaml" } }, "stat": { "atime": 1547019890.519067, "block_size": 4096, "blocks": 16, "ctime": 1547019889.8290536, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 395378, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1547019889.8290536, "nlink": 1, "path": "/etc/origin/master/master-config.yaml", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6719, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [openshift_control_plane : slurp] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:10 Wednesday 09 January 2019 15:41:03 +0100 (0:00:00.291) 0:01:37.696 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"content": "admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        env: []
        kind: BuildDefaultsConfig
        resources:
          limits: {}
          requests: {}
    BuildOverrides:
      configuration:
        apiVersion: v1
        kind: BuildOverridesConfig
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        executionRules:
        - matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: 'true'
          name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          skipOnResolutionFailure: true
        kind: ImagePolicyConfig
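# aggregatorConfig holds the front-proxy client certificate the master presents
# when proxying requests to aggregated API servers.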
aggregatorConfig:
  proxyClientInfo:
    certFile: aggregator-front-proxy.crt
    keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
authConfig:
  requestHeader:
    clientCA: front-proxy-ca.crt
    clientCommonNames:
    - aggregator-front-proxy
    extraHeaderPrefixes:
    - X-Remote-Extra-
    groupHeaders:
    - X-Remote-Group
    usernameHeaders:
    - X-Remote-User
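# controllerConfig: the named election lock keeps a single active controllers
# process; serviceServingCert is the signer for service serving certificates.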
controllerConfig:
  election:
    lockName: openshift-master-controllers
  serviceServingCert:
    signer:
      certFile: service-signer.crt
      keyFile: service-signer.key
controllers: '*'
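# corsAllowedOrigins entries are case-insensitive regular expressions matched
# against the Origin header of cross-origin requests to the master API.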
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//172\.30\.80\.240(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//172\.18\.128\.1(:|\z)
- (?i)//sp\-os\-master01\.os\.ad\.scanplus\.de(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
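# etcdClientInfo: CA plus client cert/key the API server uses to reach etcd;
# a single etcd URL here, served from the master host itself.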
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://sp-os-master01.os.ad.scanplus.de:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
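# imageConfig.format is the image name template for control-plane components;
# ${component} and ${version} are substituted per component at pull time.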
imageConfig:
  format: registry.redhat.io/openshift3/ose-${component}:${version}
  latest: false
imagePolicyConfig:
  MaxScheduledImageImportsPerMinute: 10
  ScheduledImageImportMinimumIntervalSeconds: 1800
  disableScheduledImport: false
  internalRegistryHostname: docker-registry.default.svc:5000
  maxImagesBulkImportedPerRepository: 3
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
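# kubernetesMasterConfig passes extra arguments to the embedded kube-apiserver
# and controller-manager; resources are stored in etcd v3, encoded as protobuf.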
kubernetesMasterConfig:
  apiServerArguments:
    runtime-config: []
    storage-backend:
    - etcd3
    storage-media-type:
    - application/vnd.kubernetes.protobuf
  controllerArguments:
    cluster-signing-cert-file:
    - /etc/origin/master/ca.crt
    cluster-signing-key-file:
    - /etc/origin/master/ca.key
    pv-recycler-pod-template-filepath-hostpath:
    - /etc/origin/master/recycler_pod.yaml
    pv-recycler-pod-template-filepath-nfs:
    - /etc/origin/master/recycler_pod.yaml
  masterCount: 1
  masterIP: 172.30.80.240
  podEvictionTimeout: null
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments: null
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ''
  servicesSubnet: 172.18.128.0/17
  staticNodeNames: []
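# masterClients tunes the master's own Kubernetes clients: protobuf is the
# preferred wire format, qps/burst are client-side rate limits.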
masterClients:
  externalKubernetesClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 400
    contentType: application/vnd.kubernetes.protobuf
    qps: 200
  externalKubernetesKubeConfig: ''
  openshiftLoopbackClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 600
    contentType: application/vnd.kubernetes.protobuf
    qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://os.ad.scanplus.de:8443
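# networkConfig: pods get IPs from 172.18.0.0/17, carved into one /23 per node
# (hostSubnetLength 9 = 512 addresses); services use 172.18.128.0/17; the SDN
# plugin is redhat/openshift-ovs-multitenant.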
networkConfig:
  clusterNetworks:
  - cidr: 172.18.0.0/17
    hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.18.128.0/17
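# oauthConfig: a single LDAP (Active Directory) identity provider; the RFC 2255
# URL encodes base DN, lookup attribute (sAMAccountName), scope (sub) and a
# group-membership filter.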
oauthConfig:
  assetPublicURL: https://os.ad.scanplus.de:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    name: RH_IPA_LDAP_Auth
    provider:
      apiVersion: v1
      attributes:
        email:
        - mail
        id:
        - sAMAccountName
        name:
        - cn
        preferredUsername:
        - sAMAccountName
      bindDN: CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de
      bindPassword: 3UAL.dMJI4!b
      insecure: true
      kind: LDAPPasswordIdentityProvider
      url: ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)
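      # with insecure: true the connection is plain ldap:// (no TLS), so the
      # bindDN password above crosses the network unencrypted.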
  masterCA: ca-bundle.crt
  masterPublicURL: https://os.ad.scanplus.de:8443
  masterURL: https://sp-os-master01.os.ad.scanplus.de:8443
  servingInfo:
    namedCertificates:
    - certFile: /etc/origin/master/named_certificates/cert.crt
      keyFile: /etc/origin/master/named_certificates/cert.key
      names:
      - os.ad.scanplus.de
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: /etc/origin/master/policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
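# projectConfig.defaultNodeSelector is applied to new projects that do not set
# their own node selector (here: nodes labelled nodeusage=dev).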
projectConfig:
  defaultNodeSelector: nodeusage=dev
  projectRequestMessage: ''
  projectRequestTemplate: ''
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
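# routingConfig.subdomain is the default wildcard domain for routes,
# e.g. <name>-<namespace>.apps.os.ad.scanplus.de.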
routingConfig:
  subdomain: apps.os.ad.scanplus.de
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
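# servingInfo: the API/console listener on 8443; namedCertificates serves an
# alternate certificate, selected via SNI, for requests to os.ad.scanplus.de.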
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  namedCertificates:
  - certFile: /etc/origin/master/named_certificates/cert.crt
    keyFile: /etc/origin/master/named_certificates/cert.key
    names:
    - os.ad.scanplus.de
  requestTimeoutSeconds: 3600
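# volumeConfig: dynamic provisioning of persistent volumes is enabled.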
volumeConfig:
  dynamicProvisioningEnabled: true
", "source": "/etc/origin/master/master-config.yaml", "encoding": "base64", "invocation": {"module_args": {"src": "/etc/origin/master/master-config.yaml"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "content": "admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        env: []
        kind: BuildDefaultsConfig
        resources:
          limits: {}
          requests: {}
    BuildOverrides:
      configuration:
        apiVersion: v1
        kind: BuildOverridesConfig
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        executionRules:
        - matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: 'true'
          name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          skipOnResolutionFailure: true
        kind: ImagePolicyConfig
aggregatorConfig:
  proxyClientInfo:
    certFile: aggregator-front-proxy.crt
    keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
authConfig:
  requestHeader:
    clientCA: front-proxy-ca.crt
    clientCommonNames:
    - aggregator-front-proxy
    extraHeaderPrefixes:
    - X-Remote-Extra-
    groupHeaders:
    - X-Remote-Group
    usernameHeaders:
    - X-Remote-User
controllerConfig:
  election:
    lockName: openshift-master-controllers
  serviceServingCert:
    signer:
      certFile: service-signer.crt
      keyFile: service-signer.key
controllers: '*'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//172\.30\.80\.240(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//172\.18\.128\.1(:|\z)
- (?i)//sp\-os\-master01\.os\.ad\.scanplus\.de(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://sp-os-master01.os.ad.scanplus.de:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: registry.redhat.io/openshift3/ose-${component}:${version}
  latest: false
imagePolicyConfig:
  MaxScheduledImageImportsPerMinute: 10
  ScheduledImageImportMinimumIntervalSeconds: 1800
  disableScheduledImport: false
  internalRegistryHostname: docker-registry.default.svc:5000
  maxImagesBulkImportedPerRepository: 3
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiServerArguments:
    runtime-config: []
    storage-backend:
    - etcd3
    storage-media-type:
    - application/vnd.kubernetes.protobuf
  controllerArguments:
    cluster-signing-cert-file:
    - /etc/origin/master/ca.crt
    cluster-signing-key-file:
    - /etc/origin/master/ca.key
    pv-recycler-pod-template-filepath-hostpath:
    - /etc/origin/master/recycler_pod.yaml
    pv-recycler-pod-template-filepath-nfs:
    - /etc/origin/master/recycler_pod.yaml
  masterCount: 1
  masterIP: 172.30.80.240
  podEvictionTimeout: null
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments: null
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ''
  servicesSubnet: 172.18.128.0/17
  staticNodeNames: []
masterClients:
  externalKubernetesClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 400
    contentType: application/vnd.kubernetes.protobuf
    qps: 200
  externalKubernetesKubeConfig: ''
  openshiftLoopbackClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 600
    contentType: application/vnd.kubernetes.protobuf
    qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://os.ad.scanplus.de:8443
networkConfig:
  clusterNetworks:
  - cidr: 172.18.0.0/17
    hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.18.128.0/17
oauthConfig:
  assetPublicURL: https://os.ad.scanplus.de:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    name: RH_IPA_LDAP_Auth
    provider:
      apiVersion: v1
      attributes:
        email:
        - mail
        id:
        - sAMAccountName
        name:
        - cn
        preferredUsername:
        - sAMAccountName
      bindDN: CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de
      bindPassword: 3UAL.dMJI4!b
      insecure: true
      kind: LDAPPasswordIdentityProvider
      url: ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)
  masterCA: ca-bundle.crt
  masterPublicURL: https://os.ad.scanplus.de:8443
  masterURL: https://sp-os-master01.os.ad.scanplus.de:8443
  servingInfo:
    namedCertificates:
    - certFile: /etc/origin/master/named_certificates/cert.crt
      keyFile: /etc/origin/master/named_certificates/cert.key
      names:
      - os.ad.scanplus.de
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: /etc/origin/master/policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: nodeusage=dev
  projectRequestMessage: ''
  projectRequestTemplate: ''
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: apps.os.ad.scanplus.de
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  namedCertificates:
  - certFile: /etc/origin/master/named_certificates/cert.crt
    keyFile: /etc/origin/master/named_certificates/cert.key
    names:
    - os.ad.scanplus.de
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true
", "encoding": "base64", "invocation": { "module_args": { "src": "/etc/origin/master/master-config.yaml" } }, "source": "/etc/origin/master/master-config.yaml" } TASK [openshift_control_plane : set_fact] *********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:17 Wednesday 09 January 2019 15:41:03 +0100 (0:00:00.385) 0:01:38.082 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_existing_config_master_config": { "admissionConfig": { "pluginConfig": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } } }, "aggregatorConfig": { "proxyClientInfo": { "certFile": "aggregator-front-proxy.crt", "keyFile": "aggregator-front-proxy.key" } }, "apiLevels": [ "v1" ], "apiVersion": "v1", "authConfig": { "requestHeader": { "clientCA": "front-proxy-ca.crt", "clientCommonNames": [ "aggregator-front-proxy" ], "extraHeaderPrefixes": [ "X-Remote-Extra-" ], "groupHeaders": [ "X-Remote-Group" ], "usernameHeaders": [ "X-Remote-User" ] } }, "controllerConfig": { "election": { "lockName": "openshift-master-controllers" }, "serviceServingCert": { "signer": { "certFile": "service-signer.crt", "keyFile": "service-signer.key" } } }, "controllers": "*", "corsAllowedOrigins": [ "(?i)//127\\.0\\.0\\.1(:|\\z)", "(?i)//localhost(:|\\z)", "(?i)//172\\.30\\.80\\.240(:|\\z)", "(?i)//kubernetes\\.default(:|\\z)", "(?i)//kubernetes\\.default\\.svc\\.cluster\\.local(:|\\z)", "(?i)//kubernetes(:|\\z)", "(?i)//openshift\\.default(:|\\z)", "(?i)//172\\.18\\.128\\.1(:|\\z)", "(?i)//sp\\-os\\-master01\\.os\\.ad\\.scanplus\\.de(:|\\z)", "(?i)//openshift\\.default\\.svc(:|\\z)", "(?i)//openshift\\.default\\.svc\\.cluster\\.local(:|\\z)", "(?i)//kubernetes\\.default\\.svc(:|\\z)", "(?i)//openshift(:|\\z)" ], "dnsConfig": { "bindAddress": "0.0.0.0:8053", "bindNetwork": "tcp4" }, "etcdClientInfo": { "ca": "master.etcd-ca.crt", "certFile": "master.etcd-client.crt", "keyFile": "master.etcd-client.key", "urls": [ "https://sp-os-master01.os.ad.scanplus.de:2379" ] }, "etcdStorageConfig": { "kubernetesStoragePrefix": "kubernetes.io", "kubernetesStorageVersion": "v1", "openShiftStoragePrefix": "openshift.io", "openShiftStorageVersion": "v1" }, "imageConfig": { "format": "registry.redhat.io/openshift3/ose-${component}:${version}", "latest": false }, "imagePolicyConfig": { "MaxScheduledImageImportsPerMinute": 10, "ScheduledImageImportMinimumIntervalSeconds": 1800, "disableScheduledImport": false, "internalRegistryHostname": "docker-registry.default.svc:5000", "maxImagesBulkImportedPerRepository": 3 }, "kind": "MasterConfig", "kubeletClientInfo": { "ca": "ca-bundle.crt", "certFile": "master.kubelet-client.crt", "keyFile": "master.kubelet-client.key", 
"port": 10250 }, "kubernetesMasterConfig": { "apiServerArguments": { "runtime-config": [], "storage-backend": [ "etcd3" ], "storage-media-type": [ "application/vnd.kubernetes.protobuf" ] }, "controllerArguments": { "cluster-signing-cert-file": [ "/etc/origin/master/ca.crt" ], "cluster-signing-key-file": [ "/etc/origin/master/ca.key" ], "pv-recycler-pod-template-filepath-hostpath": [ "/etc/origin/master/recycler_pod.yaml" ], "pv-recycler-pod-template-filepath-nfs": [ "/etc/origin/master/recycler_pod.yaml" ] }, "masterCount": 1, "masterIP": "172.30.80.240", "podEvictionTimeout": null, "proxyClientInfo": { "certFile": "master.proxy-client.crt", "keyFile": "master.proxy-client.key" }, "schedulerArguments": null, "schedulerConfigFile": "/etc/origin/master/scheduler.json", "servicesNodePortRange": "", "servicesSubnet": "172.18.128.0/17", "staticNodeNames": [] }, "masterClients": { "externalKubernetesClientConnectionOverrides": { "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 400, "contentType": "application/vnd.kubernetes.protobuf", "qps": 200 }, "externalKubernetesKubeConfig": "", "openshiftLoopbackClientConnectionOverrides": { "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 600, "contentType": "application/vnd.kubernetes.protobuf", "qps": 300 }, "openshiftLoopbackKubeConfig": "openshift-master.kubeconfig" }, "masterPublicURL": "https://os.ad.scanplus.de:8443", "networkConfig": { "clusterNetworks": [ { "cidr": "172.18.0.0/17", "hostSubnetLength": 9 } ], "externalIPNetworkCIDRs": [ "0.0.0.0/0" ], "networkPluginName": "redhat/openshift-ovs-multitenant", "serviceNetworkCIDR": "172.18.128.0/17" }, "oauthConfig": { "assetPublicURL": "https://os.ad.scanplus.de:8443/console/", "grantConfig": { "method": "auto" }, "identityProviders": [ { "challenge": true, "login": true, "name": "RH_IPA_LDAP_Auth", "provider": { "apiVersion": "v1", "attributes": { "email": [ "mail" ], "id": [ "sAMAccountName" ], "name": [ "cn" ], "preferredUsername": [ "sAMAccountName" ] }, "bindDN": "CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de", "bindPassword": "3UAL.dMJI4!b", "insecure": true, "kind": "LDAPPasswordIdentityProvider", "url": "ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)" } } ], "masterCA": "ca-bundle.crt", "masterPublicURL": "https://os.ad.scanplus.de:8443", "masterURL": "https://sp-os-master01.os.ad.scanplus.de:8443", "servingInfo": { "namedCertificates": [ { "certFile": "/etc/origin/master/named_certificates/cert.crt", "keyFile": "/etc/origin/master/named_certificates/cert.key", "names": [ "os.ad.scanplus.de" ] } ] }, "sessionConfig": { "sessionMaxAgeSeconds": 3600, "sessionName": "ssn", "sessionSecretsFile": "/etc/origin/master/session-secrets.yaml" }, "tokenConfig": { "accessTokenMaxAgeSeconds": 86400, "authorizeTokenMaxAgeSeconds": 500 } }, "pauseControllers": false, "policyConfig": { "bootstrapPolicyFile": "/etc/origin/master/policy.json", "openshiftInfrastructureNamespace": "openshift-infra", "openshiftSharedResourcesNamespace": "openshift" }, "projectConfig": { "defaultNodeSelector": "nodeusage=dev", "projectRequestMessage": "", "projectRequestTemplate": "", "securityAllocator": { "mcsAllocatorRange": "s0:/2", "mcsLabelsPerProject": 5, "uidAllocatorRange": "1000000000-1999999999/10000" } }, "routingConfig": { "subdomain": "apps.os.ad.scanplus.de" }, 
"serviceAccountConfig": { "limitSecretReferences": false, "managedNames": [ "default", "builder", "deployer" ], "masterCA": "ca-bundle.crt", "privateKeyFile": "serviceaccounts.private.key", "publicKeyFiles": [ "serviceaccounts.public.key" ] }, "servingInfo": { "bindAddress": "0.0.0.0:8443", "bindNetwork": "tcp4", "certFile": "master.server.crt", "clientCA": "ca.crt", "keyFile": "master.server.key", "maxRequestsInFlight": 500, "namedCertificates": [ { "certFile": "/etc/origin/master/named_certificates/cert.crt", "keyFile": "/etc/origin/master/named_certificates/cert.key", "names": [ "os.ad.scanplus.de" ] } ], "requestTimeoutSeconds": 3600 }, "volumeConfig": { "dynamicProvisioningEnabled": true } } }, "changed": false } TASK [openshift_control_plane : Check for file paths outside of /etc/origin/master in master's config] ********************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:23 Wednesday 09 January 2019 15:41:04 +0100 (0:00:00.219) 0:01:38.301 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "Aight, configs looking good" } TASK [openshift_control_plane : set_fact] *********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:28 Wednesday 09 January 2019 15:41:04 +0100 (0:00:00.144) 0:01:38.446 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_existing_idproviders": [ { "challenge": true, "login": true, "name": "RH_IPA_LDAP_Auth", "provider": { "apiVersion": "v1", "attributes": { "email": [ "mail" ], "id": [ "sAMAccountName" ], "name": [ "cn" ], "preferredUsername": [ "sAMAccountName" ] }, "bindDN": "CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de", "bindPassword": "3UAL.dMJI4!b", "insecure": true, "kind": "LDAPPasswordIdentityProvider", "url": "ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)" } } ] }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:76 Wednesday 09 January 2019 15:41:04 +0100 (0:00:00.159) 0:01:38.605 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: 
/usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:79 Wednesday 09 January 2019 15:41:04 +0100 (0:00:00.146) 0:01:38.751 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "osm_cluster_network_cidr": "172.18.0.0/17", "osm_host_subnet_length": "9" }, "changed": false } META: ran handlers META: ran handlers PLAY [Initialize special first-master variables] **************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:93 Wednesday 09 January 2019 15:41:04 +0100 (0:00:00.146) 0:01:38.898 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_config_node_selector": "nodeusage=dev" }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:102 Wednesday 09 January 2019 15:41:04 +0100 (0:00:00.151) 0:01:39.050 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "first_master_client_binary": "oc", "l_osm_default_node_selector": "nodeusage=dev", "openshift_client_binary": "oc" }, "changed": false } META: ran handlers META: ran handlers PLAY [Disable web console if required] ************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:115 Wednesday 09 January 2019 15:41:04 +0100 (0:00:00.135) 0:01:39.185 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Setup yum repositories for all hosts] ********************************************************************************************************************************************************************************************************************************************************************************* skipping: no hosts matched PLAY [Install packages necessary for installer] 
***************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Determine if chrony is installed] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:9 Wednesday 09 January 2019 15:41:05 +0100 (0:00:00.126) 0:01:39.312 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Install ntp package] ************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:16 Wednesday 09 January 2019 15:41:07 +0100 (0:00:02.037) 0:01:41.349 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: 
[sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Start and enable ntpd/chronyd] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:26 Wednesday 09 January 2019 15:41:09 +0100 (0:00:02.010) 0:01:43.360 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Ensure openshift-ansible installer package deps are installed] ******************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:33 Wednesday 09 January 2019 15:41:11 +0100 (0:00:02.049) 0:01:45.410 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result 
was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Initialize cluster facts] ********************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [get openshift_current_version] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:10 Wednesday 09 January 2019 15:41:13 +0100 (0:00:02.134) 0:01:47.545 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION 
FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node03.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.11.51"}}\n', '') Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node05.os.ad.scanplus.de '/bin/sh -c 
'"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node06.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node07.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node08.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.11.51" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node09.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"deployment_type": 
"openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node10.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node11.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node12.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, 
"invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.10.34"}}\n', '') ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.10.34" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } TASK [set_fact openshift_portal_net if present on masters] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:19 Wednesday 09 January 2019 15:41:15 +0100 (0:00:02.574) 0:01:50.120 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } TASK [Gather Cluster facts] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:27 Wednesday 09 January 2019 15:41:18 +0100 (0:00:02.493) 0:01:52.613 ***** Using module file 
/usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node03.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o 
ControlPath=/root/.ansible/cp/%h-%r sp-os-node05.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-infra01.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "infra", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.241", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-infra01-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-infra01-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-infra01-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.241", "hostname": "sp-os-infra01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-infra01.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.241", "all_hostnames": ["kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "public_hostname": "sp-os-infra01.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") Using module file 
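Each "SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s ... -o ControlPath=/root/.ansible/cp/%h-%r" line above reflects Ansible's OpenSSH connection multiplexing: a persistent control socket per host/user pair is kept alive for 600 seconds and reused by subsequent tasks, which is why later tasks against the same hosts skip the TCP/auth handshake. If these options ever need tuning, they are normally supplied as SSH arguments per group or host; a minimal sketch, assuming a group_vars file (ansible_ssh_common_args is a standard Ansible connection variable; the option values below simply mirror what this log shows):

    # group_vars/all.yml -- hypothetical placement; values mirror the log.
    ansible_ssh_common_args: >-
      -o ControlMaster=auto
      -o ControlPersist=600s
      -o ConnectTimeout=30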
/usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node06.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-infra02.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "infra", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.242", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-infra02-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-infra02-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-infra02-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.242", "hostname": "sp-os-infra02.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-infra02.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.242", "all_hostnames": ["sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "public_hostname": "sp-os-infra02.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", 
"openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"dns_ip": "172.30.80.240", "proxy_mode": "iptables", "nodename": "sp-os-master01.os.ad.scanplus.de", "bootstrapped": true, "sdn_mtu": "1450"}, "builddefaults": {"config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}}}, "logging": {"elasticsearch": {"pvc": {}, "ops": {"pvc": {}}}}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides"]}, "master": {"public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "api_port": "8443", "console_port": "8443", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "console_path": "/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "console_use_ssl": true, "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "cluster_method": "native", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "admission_plugin_config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}, "BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}, "openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"key": "images.openshift.io/deny-execution", "value": "true"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}, "named_certificates": [{"certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": ["sp-os-master01.os.ad.scanplus.de"], "cafile": "/etc/origin/master/named_certificates/ca.crt"}], "manage_htpasswd": true, "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "controllers_port": "8444", "session_name": "ssn"}, "common": 
{"is_etcd_system_container": false, "ip": "172.30.80.240", "dns_domain": "cluster.local", "is_master_system_container": false, "public_ip": "172.30.80.240", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "etcd_runtime": "host", "rolling_restart_mode": "services", "hostname": "sp-os-master01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "is_openvswitch_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "kube_svc_ip": "172.18.128.1", "config_base": "/etc/origin", "all_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "is_containerized": false, "no_proxy_etcd_host_ips": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "portal_net": "172.18.128.0/17", "deployment_type": "openshift-enterprise"}, "hosted": {"templates": {"kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig"}, "routers": [{"name": "router", "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "replicas": "{{ replicas | default(1) }}", "serviceaccount": "router", "namespace": "default", "stats_port": 1936, "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "ports": ["80:80", "443:443"]}], "infra": {"selector": "region=infra"}, "registry": {"force": [false], "name": "docker-registry", "serviceaccount": "registry", "edits": [{"action": "put", "value": {"updatePeriodSeconds": 1, "timeoutSeconds": 600, "maxSurge": "25%", "intervalSeconds": 1, "maxUnavailable": "25%"}, "key": "spec.strategy.rollingParams"}], "selector": "region=infra", "cert": {"expire": {"days": 730}}, "env": {"vars": {}}, "volumes": [], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "router": {"certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "create_certificate": true, "image": "openshift3/ose-${component}:${version}", "selector": "region=infra", "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "docker": {"registry": {"insecure": {"default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}"}}}, "wfp": {"rc": {"phase": {"msg": "All items completed", "changed": true, "results": [{"_ansible_parsed": true, "stderr_lines": [], "rc": 0, "_ansible_item_result": true, "end": "2018-01-31 14:15:11.698797", "_ansible_no_log": false, "stdout": "Complete", "cmd": ["oc", "get", "replicationcontroller", "router-1", 
"--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }"], "attempts": 1, "item": [{"name": "router", "certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "stats_port": 1936, "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "images": "openshift3/ose-${component}:${version}", "selector": "region=infra", "ports": ["80:80", "443:443"]}, {"_ansible_parsed": true, "stderr_lines": [], "_ansible_item_result": true, "end": "2018-01-31 14:15:11.096068", "_ansible_no_log": false, "stdout": "1", "cmd": ["oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "rc": 0, "item": {"name": "router", "certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "images": "openshift3/ose-${component}:${version}", "stats_port": 1936, "ports": ["80:80", "443:443"]}, "delta": "0:00:00.196315", "stderr": "", "changed": true, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .status.latestVersion }\'", "removes": null, "creates": null, "chdir": null, "stdin": null}}, "stdout_lines": ["1"], "start": "2018-01-31 14:15:10.899753", "_ansible_ignore_errors": null, "failed": false}], "delta": "0:00:00.199963", "stderr": "", "changed": true, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }\'", "removes": null, "warn": true, "chdir": null, "stdin": null}}, "stdout_lines": ["Complete"], "failed_when_result": false, "start": "2018-01-31 14:15:11.498834", "_ansible_ignore_errors": null, "failed": false}]}}}}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "buildoverrides": {"config": {"BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}}}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", 
"kubernetes.default.svc.cluster.local", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-infra01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift" ], "ip": "172.30.80.241", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-infra01.os.ad.scanplus.de", "public_ip": "172.30.80.241", "raw_hostname": "sp-os-infra01.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-infra01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-infra01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-infra01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.241", "labels": { "region": "infra", "zone": "RZ-LM07" }, "nodename": "sp-os-infra01.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node07.os.ad.scanplus.de 
'/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-infra02.os.ad.scanplus.de", "internal_hostnames": [ "sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift" ], "ip": "172.30.80.242", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-infra02.os.ad.scanplus.de", "public_ip": "172.30.80.242", "raw_hostname": "sp-os-infra02.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-infra02-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-infra02-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-infra02-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.80.242", "labels": { "region": "infra", "zone": "RZ-LM07" }, "nodename": "sp-os-infra02.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } Using module file 
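The "ok:" records for sp-os-infra01 and sp-os-infra02 above show the shape of each openshift_facts call in the "Gather Cluster facts" task: role is "common", and local_facts seeds portal_net (172.18.128.0/17) and generate_no_proxy_hosts while leaving hostname, ip, public_hostname, public_ip, and the proxy settings empty for the module to discover on the host. A minimal sketch of the corresponding task, reconstructed only from the module_args printed in this log (the task name matches the log header; the exact playbook wiring is an assumption):

    # Hypothetical reconstruction from the module_args logged above;
    # every argument value here is taken verbatim from the log records.
    - name: Gather Cluster facts
      openshift_facts:
        role: common
        local_facts:
          hostname: ""
          ip: ""
          public_hostname: ""
          public_ip: ""
          portal_net: "172.18.128.0/17"
          generate_no_proxy_hosts: true
          http_proxy: ""
          https_proxy: ""
          no_proxy: ""
          cloudprovider: ""

Note the recurring stderr fragment "KeyError('ansible_os_family',)" attached to each raw module return: it is emitted by the module on every host in this run but does not fail the task, and the facts are still returned and registered.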
/usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node08.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "builddefaults": { "config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } } } }, "buildoverrides": { "config": { "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } } } }, "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.240", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "no_proxy_etcd_host_ips": "172.30.80.240", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "public_ip": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "rolling_restart_mode": "services", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "hosted": { "docker": { "registry": { "insecure": { "default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}" } } }, "infra": { "selector": "region=infra" }, "registry": { "cert": { "expire": { "days": 730 } }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams", "value": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 } } ], "env": { "vars": {} }, "force": [ false ], "name": "docker-registry", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "serviceaccount": "registry", "volumes": [], "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "router": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "create_certificate": true, "edits": [ { "action": "put", "key": 
"spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "image": "openshift3/ose-${component}:${version}", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "routers": [ { "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "{{ replicas | default(1) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "serviceaccount": "router", "stats_port": 1936 } ], "templates": { "kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig" }, "wfp": { "rc": { "phase": { "changed": true, "msg": "All items completed", "results": [ { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "attempts": 1, "changed": true, "cmd": [ "oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\.io/deployment\\.phase }" ], "delta": "0:00:00.199963", "end": "2018-01-31 14:15:11.698797", "failed": false, "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .metadata.annotations.openshift\\.io/deployment\\.phase }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": [ { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": [ "oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }" ], "delta": "0:00:00.196315", "end": "2018-01-31 14:15:11.096068", "failed": false, "invocation": { "module_args": { "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .status.latestVersion }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", 
"key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, "rc": 0, "start": "2018-01-31 14:15:10.899753", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": [ "1" ] } ], "rc": 0, "start": "2018-01-31 14:15:11.498834", "stderr": "", "stderr_lines": [], "stdout": "Complete", "stdout_lines": [ "Complete" ] } ] } } } }, "logging": { "elasticsearch": { "ops": { "pvc": {} }, "pvc": {} } }, "master": { "admission_plugin_config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } }, "api_port": "8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "cluster_method": "native", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "manage_htpasswd": true, "named_certificates": [ { "cafile": "/etc/origin/master/named_certificates/ca.crt", "certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": [ "sp-os-master01.os.ad.scanplus.de" ] } ], "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.240", "nodename": "sp-os-master01.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": 
{"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-node03.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.233", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node03-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node03-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node03-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.233", "hostname": "sp-os-node03.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node03.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.233", "all_hostnames": ["172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node03.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o 
ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node09.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node03.os.ad.scanplus.de", "internal_hostnames": [ "172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.233", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node03.os.ad.scanplus.de", "public_ip": "172.30.80.233", "raw_hostname": "sp-os-node03.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node03-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node03-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node03-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.233", "labels": { "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node03.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", 
"public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node10.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node11.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node12.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node09.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.170", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node09-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node09-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", 
"console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node09-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.170", "hostname": "sp-os-node09.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node09.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.170", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node09.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node05.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.81.88", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node05-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node05-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", 
"loopback_user": "system:openshift-master/sp-os-node05-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.88", "hostname": "sp-os-node05.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node05.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.88", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de"], "public_hostname": "sp-os-node05.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node09.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.29.80.170", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node09.os.ad.scanplus.de", "public_ip": "172.29.80.170", "raw_hostname": "sp-os-node09.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": 
"https://sp-os-node09.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node09-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node09-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node09-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.29.80.170", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, "nodename": "sp-os-node09.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node05.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de" ], "ip": "172.30.81.88", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node05.os.ad.scanplus.de", "public_ip": "172.30.81.88", "raw_hostname": "sp-os-node05.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "loopback_cluster_name": 
"sp-os-node05-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node05-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node05-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.81.88", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node05.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-node02.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.244", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node02.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node02-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node02-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node02.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node02-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.244", "hostname": "sp-os-node02.os.ad.scanplus.de", 
"deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node02.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.244", "all_hostnames": ["sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node02.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node10.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.171", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node10-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node10-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node10-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.171", "hostname": "sp-os-node10.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", 
"is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node10.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.171", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node10.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node02.os.ad.scanplus.de", "internal_hostnames": [ "sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.244", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node02.os.ad.scanplus.de", "public_ip": "172.30.80.244", "raw_hostname": "sp-os-node02.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node02.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node02-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node02-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node02-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "public_console_url": 
"https://sp-os-node02.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.80.244", "labels": { "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node02.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node06.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.81.89", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node06-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node06-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node06-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.89", "hostname": "sp-os-node06.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node06.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.89", "all_hostnames": 
["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift"], "public_hostname": "sp-os-node06.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node10.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift" ], "ip": "172.29.80.171", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node10.os.ad.scanplus.de", "public_ip": "172.29.80.171", "raw_hostname": "sp-os-node10.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node10-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node10-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node10-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.29.80.171", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, 
"nodename": "sp-os-node10.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node06.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift" ], "ip": "172.30.81.89", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node06.os.ad.scanplus.de", "public_ip": "172.30.81.89", "raw_hostname": "sp-os-node06.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node06-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node06-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node06-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.81.89", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node06.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": 
"false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node11.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.172", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node11-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node11-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node11-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.172", "hostname": "sp-os-node11.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node11.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.172", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", "openshift"], 
"public_hostname": "sp-os-node11.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node12.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.173", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node12-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node12-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node12-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.173", "hostname": "sp-os-node12.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node12.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.173", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift"], "public_hostname": "sp-os-node12.os.ad.scanplus.de", "is_openvswitch_system_container": false, 
"deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node11.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", "openshift" ], "ip": "172.29.80.172", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node11.os.ad.scanplus.de", "public_ip": "172.29.80.172", "raw_hostname": "sp-os-node11.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node11-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node11-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node11-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.29.80.172", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, "nodename": "sp-os-node11.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": 
[ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node12.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift" ], "ip": "172.29.80.173", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node12.os.ad.scanplus.de", "public_ip": "172.29.80.173", "raw_hostname": "sp-os-node12.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node12-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node12-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node12-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.29.80.173", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, "nodename": "sp-os-node12.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, 
"local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node07.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.81.90", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node07-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node07-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node07-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.90", "hostname": "sp-os-node07.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node07.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.90", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node07.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", 
"openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node07.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.81.90", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node07.os.ad.scanplus.de", "public_ip": "172.30.81.90", "raw_hostname": "sp-os-node07.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node07-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node07-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node07-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.81.90", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node07.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", 
"public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node08.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.81.91", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node08-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node08-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node08-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.91", "hostname": "sp-os-node08.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node08.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.91", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node08.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", 
"use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node08.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.81.91", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node08.os.ad.scanplus.de", "public_ip": "172.30.81.91", "raw_hostname": "sp-os-node08.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node08-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node08-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node08-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.81.91", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node08.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, 
"unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-node04.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.234", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node04-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node04-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node04-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.234", "hostname": "sp-os-node04.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node04.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.234", "all_hostnames": ["kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de"], "public_hostname": "sp-os-node04.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { 
"all_hostnames": [ "kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node04.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de" ], "ip": "172.30.80.234", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node04.os.ad.scanplus.de", "public_ip": "172.30.80.234", "raw_hostname": "sp-os-node04.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node04-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node04-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node04-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.80.234", "labels": { "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node04.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } TASK [Set fact of no_proxy_internal_hostnames] 
****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:42 Wednesday 09 January 2019 15:42:01 +0100 (0:00:43.180) 0:02:35.794 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node04.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Initialize openshift.node.sdn_mtu] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:60 Wednesday 09 January 2019 15:42:03 +0100 (0:00:02.152) 0:02:37.946 ***** Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 
0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node03.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"dns_ip": "172.30.80.240", "proxy_mode": "iptables", "nodename": "sp-os-master01.os.ad.scanplus.de", "bootstrapped": true, "sdn_mtu": "1450"}, "builddefaults": {"config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}}}, "logging": {"elasticsearch": {"pvc": {}, "ops": {"pvc": {}}}}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides"]}, "master": {"public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "api_port": "8443", "console_port": "8443", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, 
"console_path": "/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "console_use_ssl": true, "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "cluster_method": "native", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "admission_plugin_config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}, "BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}, "openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"key": "images.openshift.io/deny-execution", "value": "true"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}, "named_certificates": [{"certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": ["sp-os-master01.os.ad.scanplus.de"], "cafile": "/etc/origin/master/named_certificates/ca.crt"}], "manage_htpasswd": true, "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "controllers_port": "8444", "session_name": "ssn"}, "common": {"is_etcd_system_container": false, "ip": "172.30.80.240", "dns_domain": "cluster.local", "is_master_system_container": false, "public_ip": "172.30.80.240", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "etcd_runtime": "host", "rolling_restart_mode": "services", "hostname": "sp-os-master01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "is_openvswitch_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "kube_svc_ip": "172.18.128.1", "config_base": "/etc/origin", "all_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "is_containerized": false, "no_proxy_etcd_host_ips": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "portal_net": "172.18.128.0/17", "deployment_type": "openshift-enterprise"}, "hosted": {"templates": {"kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig"}, "routers": [{"name": "router", "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "replicas": "{{ replicas | default(1) }}", "serviceaccount": "router", "namespace": "default", "stats_port": 1936, "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "ports": ["80:80", "443:443"]}], "infra": {"selector": "region=infra"}, "registry": {"force": [false], "name": "docker-registry", "serviceaccount": 
"registry", "edits": [{"action": "put", "value": {"updatePeriodSeconds": 1, "timeoutSeconds": 600, "maxSurge": "25%", "intervalSeconds": 1, "maxUnavailable": "25%"}, "key": "spec.strategy.rollingParams"}], "selector": "region=infra", "cert": {"expire": {"days": 730}}, "env": {"vars": {}}, "volumes": [], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "router": {"certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "create_certificate": true, "image": "openshift3/ose-${component}:${version}", "selector": "region=infra", "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "docker": {"registry": {"insecure": {"default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}"}}}, "wfp": {"rc": {"phase": {"msg": "All items completed", "changed": true, "results": [{"_ansible_parsed": true, "stderr_lines": [], "rc": 0, "_ansible_item_result": true, "end": "2018-01-31 14:15:11.698797", "_ansible_no_log": false, "stdout": "Complete", "cmd": ["oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }"], "attempts": 1, "item": [{"name": "router", "certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "stats_port": 1936, "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "images": "openshift3/ose-${component}:${version}", "selector": "region=infra", "ports": ["80:80", "443:443"]}, {"_ansible_parsed": true, "stderr_lines": [], "_ansible_item_result": true, "end": "2018-01-31 14:15:11.096068", "_ansible_no_log": false, "stdout": "1", "cmd": ["oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "rc": 0, "item": {"name": "router", "certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "images": "openshift3/ose-${component}:${version}", "stats_port": 1936, "ports": ["80:80", "443:443"]}, "delta": "0:00:00.196315", "stderr": "", "changed": true, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get 
deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .status.latestVersion }\'", "removes": null, "creates": null, "chdir": null, "stdin": null}}, "stdout_lines": ["1"], "start": "2018-01-31 14:15:10.899753", "_ansible_ignore_errors": null, "failed": false}], "delta": "0:00:00.199963", "stderr": "", "changed": true, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }\'", "removes": null, "warn": true, "chdir": null, "stdin": null}}, "stdout_lines": ["Complete"], "failed_when_result": false, "start": "2018-01-31 14:15:11.498834", "_ansible_ignore_errors": null, "failed": false}]}}}}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "buildoverrides": {"config": {"BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}}}}}}\n', "KeyError('ansible_os_family',)\n") (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-infra01.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "infra", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.241", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-infra01-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-infra01-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-infra01-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.241", "hostname": "sp-os-infra01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-infra01.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.241", "all_hostnames": ["kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", 
"openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "public_hostname": "sp-os-infra01.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node05.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-infra02.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "infra", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.242", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-infra02-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-infra02-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-infra02-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.242", "hostname": "sp-os-infra02.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": 
"sp-os-infra02.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.242", "all_hostnames": ["sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "public_hostname": "sp-os-infra02.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-infra01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.241", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-infra01.os.ad.scanplus.de", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift" ], "ip": "172.30.80.241", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-infra01.os.ad.scanplus.de", "public_ip": "172.30.80.241", "raw_hostname": "sp-os-infra01.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-infra01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-infra01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-infra01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-infra01.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { 
"bootstrapped": true, "dns_ip": "172.30.80.241", "labels": { "region": "infra", "zone": "RZ-LM07" }, "nodename": "sp-os-infra01.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node06.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "builddefaults": { "config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } } } }, "buildoverrides": { "config": { "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } } } }, "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.240", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "no_proxy_etcd_host_ips": "172.30.80.240", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "public_ip": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "rolling_restart_mode": "services", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "hosted": { "docker": { "registry": { "insecure": { "default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}" } } }, "infra": { "selector": "region=infra" }, "registry": { 
"cert": { "expire": { "days": 730 } }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams", "value": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 } } ], "env": { "vars": {} }, "force": [ false ], "name": "docker-registry", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "serviceaccount": "registry", "volumes": [], "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "router": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "create_certificate": true, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "image": "openshift3/ose-${component}:${version}", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "routers": [ { "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "{{ replicas | default(1) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "serviceaccount": "router", "stats_port": 1936 } ], "templates": { "kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig" }, "wfp": { "rc": { "phase": { "changed": true, "msg": "All items completed", "results": [ { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "attempts": 1, "changed": true, "cmd": [ "oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\.io/deployment\\.phase }" ], "delta": "0:00:00.199963", "end": "2018-01-31 14:15:11.698797", "failed": false, "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .metadata.annotations.openshift\\.io/deployment\\.phase }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": [ { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": [ "oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", 
"/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }" ], "delta": "0:00:00.196315", "end": "2018-01-31 14:15:11.096068", "failed": false, "invocation": { "module_args": { "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .status.latestVersion }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, "rc": 0, "start": "2018-01-31 14:15:10.899753", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": [ "1" ] } ], "rc": 0, "start": "2018-01-31 14:15:11.498834", "stderr": "", "stderr_lines": [], "stdout": "Complete", "stdout_lines": [ "Complete" ] } ] } } } }, "logging": { "elasticsearch": { "ops": { "pvc": {} }, "pvc": {} } }, "master": { "admission_plugin_config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } }, "api_port": "8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "cluster_method": "native", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "manage_htpasswd": true, "named_certificates": [ { "cafile": "/etc/origin/master/named_certificates/ca.crt", "certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": [ "sp-os-master01.os.ad.scanplus.de" ] } ], "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.240", "nodename": "sp-os-master01.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, 
"invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node07.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-infra02.os.ad.scanplus.de", "internal_hostnames": [ "sp-os-infra02.os.ad.scanplus.de", "kubernetes.default", "kubernetes", "172.30.80.242", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift" ], "ip": "172.30.80.242", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-infra02.os.ad.scanplus.de", "public_ip": "172.30.80.242", "raw_hostname": "sp-os-infra02.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-infra02-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-infra02-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-infra02-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-infra02.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-infra02.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { 
"bootstrapped": false, "dns_ip": "172.30.80.242", "labels": { "region": "infra", "zone": "RZ-LM07" }, "nodename": "sp-os-infra02.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node08.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node09.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-node03.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.233", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node03-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node03-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node03-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": 
{"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.233", "hostname": "sp-os-node03.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node03.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.233", "all_hostnames": ["172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node03.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node10.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node03.os.ad.scanplus.de", "internal_hostnames": [ "172.30.80.233", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "sp-os-node03.os.ad.scanplus.de", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.233", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node03.os.ad.scanplus.de", "public_ip": "172.30.80.233", "raw_hostname": "sp-os-node03.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", 
"use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node03-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node03-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node03-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node03.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node03.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.233", "labels": { "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node03.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-node02.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.244", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node02.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node02-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node02-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node02.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node02-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.244", "hostname": 
"sp-os-node02.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node02.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.244", "all_hostnames": ["sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node02.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node11.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node02.os.ad.scanplus.de", "internal_hostnames": [ "sp-os-node02.os.ad.scanplus.de", "172.30.80.244", "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.244", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node02.os.ad.scanplus.de", "public_ip": "172.30.80.244", "raw_hostname": "sp-os-node02.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": 
"https://sp-os-node02.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node02.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node02-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node02-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node02-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node02.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node02.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.80.244", "labels": { "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node02.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node12.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node09.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.170", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node09-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, 
"loopback_context_name": "default/sp-os-node09-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node09-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.170", "hostname": "sp-os-node09.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node09.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.170", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node09.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node09.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node09.os.ad.scanplus.de", "172.29.80.170", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.29.80.170", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node09.os.ad.scanplus.de", "public_ip": "172.29.80.170", "raw_hostname": "sp-os-node09.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": 
"0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node09-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node09-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node09-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node09.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node09.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.29.80.170", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, "nodename": "sp-os-node09.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node10.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.171", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node10-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node10-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node10-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.171", "hostname": "sp-os-node10.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, 
"dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node10.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.171", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node10.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node10.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "sp-os-node10.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.171", "kubernetes.default.svc", "openshift" ], "ip": "172.29.80.171", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node10.os.ad.scanplus.de", "public_ip": "172.29.80.171", "raw_hostname": "sp-os-node10.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node10.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node10-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node10-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node10-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": 
"https://sp-os-node10.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node10.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.29.80.171", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, "nodename": "sp-os-node10.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node11.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.172", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node11-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node11-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node11-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.172", "hostname": "sp-os-node11.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node11.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.172", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", 
"openshift"], "public_hostname": "sp-os-node11.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node11.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-node11.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.29.80.172", "kubernetes.default.svc", "openshift" ], "ip": "172.29.80.172", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node11.os.ad.scanplus.de", "public_ip": "172.29.80.172", "raw_hostname": "sp-os-node11.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node11-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node11-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node11-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node11.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node11.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.29.80.172", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, "nodename": "sp-os-node11.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": 
null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node12.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75"}, "dns_ip": "172.29.80.173", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node12-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node12-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node12-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.29.80.173", "hostname": "sp-os-node12.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node12.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.29.80.173", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift"], "public_hostname": "sp-os-node12.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": 
"172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node12.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node12.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.29.80.173", "openshift" ], "ip": "172.29.80.173", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node12.os.ad.scanplus.de", "public_ip": "172.29.80.173", "raw_hostname": "sp-os-node12.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node12-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node12-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node12-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node12.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node12.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.29.80.173", "labels": { "nodeusage": "prod", "region": "primary", "zone": "RZ-FFM-KL75" }, "nodename": "sp-os-node12.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", 
"facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node06.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.81.89", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node06-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node06-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node06-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.89", "hostname": "sp-os-node06.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node06.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.89", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift"], "public_hostname": "sp-os-node06.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": 
"basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node06.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "kubernetes.default.svc", "sp-os-node06.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "172.30.81.89", "openshift" ], "ip": "172.30.81.89", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node06.os.ad.scanplus.de", "public_ip": "172.30.81.89", "raw_hostname": "sp-os-node06.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node06-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node06-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node06-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node06.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node06.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.81.89", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node06.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node05.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": 
"RZ-LM07"}, "dns_ip": "172.30.81.88", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node05-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node05-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node05-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.88", "hostname": "sp-os-node05.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node05.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.88", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de"], "public_hostname": "sp-os-node05.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node05.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "172.30.81.88", "kubernetes.default.svc", "openshift", "sp-os-node05.os.ad.scanplus.de" ], "ip": "172.30.81.88", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": 
false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node05.os.ad.scanplus.de", "public_ip": "172.30.81.88", "raw_hostname": "sp-os-node05.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node05-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node05-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node05-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node05.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node05.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.81.88", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node05.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node08.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.81.91", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "loopback_cluster_name": 
"sp-os-node08-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node08-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node08-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.91", "hostname": "sp-os-node08.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node08.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.91", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node08.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node08.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.91", "openshift.default.svc", "sp-os-node08.os.ad.scanplus.de", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.81.91", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node08.os.ad.scanplus.de", "public_ip": "172.30.81.91", "raw_hostname": "sp-os-node08.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { 
"api_port": "8443", "api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node08-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node08-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node08-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node08.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node08.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.81.91", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node08.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"schedulable": "false", "nodename": "sp-os-node07.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"nodeusage": "dev", "region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.81.90", "proxy_mode": "iptables", "bootstrapped": true}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node07-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node07-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node07-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.81.90", "hostname": 
"sp-os-node07.os.ad.scanplus.de", "deployment_subtype": "basic", "is_master_system_container": false, "dns_domain": "cluster.local", "is_node_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node07.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.81.90", "all_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "public_hostname": "sp-os-node07.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node07.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "172.30.81.90", "sp-os-node07.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.81.90", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node07.os.ad.scanplus.de", "public_ip": "172.30.81.90", "raw_hostname": "sp-os-node07.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node07-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node07-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": 
"system:openshift-master/sp-os-node07-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node07.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node07.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.81.90", "labels": { "nodeusage": "dev", "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node07.os.ad.scanplus.de", "proxy_mode": "iptables", "schedulable": "false", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"nodename": "sp-os-node04.os.ad.scanplus.de", "sdn_mtu": "1450", "labels": {"region": "primary", "zone": "RZ-LM07"}, "dns_ip": "172.30.80.234", "proxy_mode": "iptables", "bootstrapped": false}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "docker", "master", "cloudprovider"]}, "master": {"loopback_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "console_port": "8443", "api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "console_path": "/console", "public_console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "loopback_cluster_name": "sp-os-node04-os-ad-scanplus-de:8443", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "api_use_ssl": true, "loopback_context_name": "default/sp-os-node04-os-ad-scanplus-de:8443/system:openshift-master", "controllers_port": "8444", "console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "api_port": "8443", "session_name": "ssn", "loopback_user": "system:openshift-master/sp-os-node04-os-ad-scanplus-de:8443", "console_use_ssl": true}, "common": {"config_base": "/etc/origin", "etcd_runtime": "host", "is_etcd_system_container": false, "ip": "172.30.80.234", "hostname": "sp-os-node04.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "dns_domain": "cluster.local", "is_master_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "raw_hostname": "sp-os-node04.os.ad.scanplus.de", "is_containerized": false, "public_ip": "172.30.80.234", "all_hostnames": ["kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", 
"kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de"], "public_hostname": "sp-os-node04.os.ad.scanplus.de", "is_openvswitch_system_container": false, "deployment_type": "openshift-enterprise", "portal_net": "172.18.128.0/17", "internal_hostnames": ["kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de"], "kube_svc_ip": "172.18.128.1"}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-node04.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.234", "kubernetes", "openshift.default", "172.18.128.1", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "openshift", "sp-os-node04.os.ad.scanplus.de" ], "ip": "172.30.80.234", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-node04.os.ad.scanplus.de", "public_ip": "172.30.80.234", "raw_hostname": "sp-os-node04.os.ad.scanplus.de", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "docker", "master", "cloudprovider" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "master": { "api_port": "8443", "api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "loopback_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-node04-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-node04-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-node04-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-node04.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-node04.os.ad.scanplus.de:8443/console", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": false, "dns_ip": "172.30.80.234", "labels": { "region": "primary", "zone": "RZ-LM07" }, "nodename": "sp-os-node04.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, 
"backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } TASK [set_fact l_kubelet_node_name] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:65 Wednesday 09 January 2019 15:42:47 +0100 (0:00:43.470) 0:03:21.417 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-master01.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-infra01.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-infra02.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node02.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node03.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node04.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node05.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node06.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node07.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node08.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node09.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node10.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node11.os.ad.scanplus.de" }, "changed": false } ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-node12.os.ad.scanplus.de" }, "changed": false } META: ran handlers META: ran handlers PLAY [Initialize etcd host variables] *************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] 
************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:78 Wednesday 09 January 2019 15:42:49 +0100 (0:00:02.361) 0:03:23.778 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_etcd_hosts": [ "sp-os-master01.os.ad.scanplus.de" ], "openshift_master_etcd_port": "2379", "openshift_no_proxy_etcd_host_ips": "172.30.80.240" }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:89 Wednesday 09 January 2019 15:42:49 +0100 (0:00:00.321) 0:03:24.099 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_etcd_urls": [ "https://sp-os-master01.os.ad.scanplus.de:2379" ] }, "changed": false } META: ran handlers META: ran handlers PLAY [Determine openshift_version to configure on first master] ************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [include_role : openshift_version] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/version.yml:5 Wednesday 09 January 2019 15:42:50 +0100 (0:00:00.156) 0:03:24.255 ***** TASK [openshift_version : Use openshift_current_version fact as version to configure if already installed] ****************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:6 Wednesday 09 January 2019 15:42:50 +0100 (0:00:00.225) 0:03:24.481 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_version : Set openshift_version to openshift_release if undefined] ****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:14 Wednesday 09 January 2019 15:42:50 +0100 (0:00:00.125) 0:03:24.607 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_version": "3.11" }, "changed": false } TASK [openshift_version : debug] 
******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:21 Wednesday 09 January 2019 15:42:50 +0100 (0:00:00.144) 0:03:24.751 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "openshift_pkg_version was not defined. Falling back to -3.11" } TASK [openshift_version : set_fact] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:23 Wednesday 09 January 2019 15:42:50 +0100 (0:00:00.137) 0:03:24.889 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_pkg_version": "-3.11*" }, "changed": false } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:30 Wednesday 09 January 2019 15:42:50 +0100 (0:00:00.143) 0:03:25.032 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "openshift_image_tag was not defined. Falling back to v3.11" } TASK [openshift_version : set_fact] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:32 Wednesday 09 January 2019 15:42:50 +0100 (0:00:00.149) 0:03:25.182 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11" }, "changed": false } TASK [openshift_version : assert openshift_release in openshift_image_tag] ************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:36 Wednesday 09 January 2019 15:42:51 +0100 (0:00:00.153) 0:03:25.336 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "All assertions passed" } TASK [openshift_version : assert openshift_release in openshift_pkg_version] ************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:43 Wednesday 09 January 2019 15:42:51 +0100 (0:00:00.142) 0:03:25.479 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "All assertions passed" 
} TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:51 Wednesday 09 January 2019 15:42:51 +0100 (0:00:00.164) 0:03:25.643 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_release": "3.11" } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:53 Wednesday 09 January 2019 15:42:51 +0100 (0:00:00.309) 0:03:25.953 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_image_tag": "v3.11" } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:55 Wednesday 09 January 2019 15:42:52 +0100 (0:00:00.430) 0:03:26.383 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_pkg_version": "-3.11*" } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:57 Wednesday 09 January 2019 15:42:52 +0100 (0:00:00.147) 0:03:26.531 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_version": "3.11" } META: ran handlers META: ran handlers PLAY [Set openshift_version for etcd, node, and master hosts] *************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/version.yml:20 Wednesday 09 January 2019 15:42:52 +0100 (0:00:00.142) 0:03:26.674 ***** ok: [sp-os-infra01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-infra02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node02.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": 
"v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node03.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node04.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node05.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node06.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node07.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node08.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node09.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node10.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node11.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } ok: [sp-os-node12.os.ad.scanplus.de] => { "ansible_facts": { "openshift_image_tag": "v3.11", "openshift_pkg_version": "-3.11*", "openshift_version": "3.11" }, "changed": false } META: ran handlers META: ran handlers PLAY [Verify Requirements] ************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Run variable sanity checks] ******************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/sanity_checks.yml:14 Wednesday 09 January 2019 15:42:54 +0100 (0:00:02.508) 0:03:29.182 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "Sanity Checks passed" } TASK [Validate openshift_node_groups and openshift_node_group_name] ********************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/sanity_checks.yml:18 Wednesday 09 January 2019 15:45:37 +0100 (0:02:42.259) 0:06:11.442 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "Node group checks passed" } META: ran handlers META: ran handlers PLAY [Verify Node NetworkManager] 
PLAY [Verify Node NetworkManager] ***********************************************
skipping: no hosts matched

PLAY [Initialization Checkpoint End] ********************************************
META: ran handlers

TASK [Set install initialization 'Complete'] ************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/main.yml:44
Wednesday 09 January 2019 15:45:39 +0100 (0:00:01.984) 0:06:13.427 *****
ok: [sp-os-master01.os.ad.scanplus.de] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_initialize": {"end": "20190109154539Z", "status": "Complete"}}, "per_host": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [Update registry authentication credentials] *******************************
META: ran handlers

TASK [Install registry_auth dependencies] ***************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:7
Wednesday 09 January 2019 15:45:39 +0100 (0:00:00.165) 0:06:13.593 *****
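A task of roughly this shape produces the per-host results that follow. The package list and module arguments are taken from the module_args echoed in the log, and the "attempts": 1 field implies a retry loop; the until/retries/delay values here are assumptions:

    # Sketch of the dependency install behind the results below.
    - name: Install registry_auth dependencies
      yum:
        name:
        - atomic
        - skopeo
        state: present
      register: result
      until: result is succeeded   # assumed; the log only shows "attempts": 1
      retries: 3
      delay: 5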
"1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-infra02.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-infra01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already 
installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node07.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node05.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node06.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is 
already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node03.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 
providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node11.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node09.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node12.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", 
"1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node10.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed"], "rc": 0}\n', '') ok: [sp-os-node08.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic", "skopeo"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already 
installed"], "rc": 0}\n', '') ok: [sp-os-node04.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic", "skopeo" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:atomic-1.22.1-26.gitb507039.el7.x86_64 providing atomic is already installed", "1:skopeo-0.1.31-1.dev.gitae64ff7.el7.x86_64 providing skopeo is already installed" ] } TASK [openshift_node : Check for credentials file for registry auth] ******************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:4 Wednesday 09 January 2019 15:48:30 +0100 (0:02:51.298) 0:09:04.891 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/var/lib/origin/.docker", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1547019096.2027433, "block_size": 4096, "inode": 519865, "isgid": false, "size": 4096, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": false, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 2, "issock": false, "rgrp": false, "gr_name": "root", "path": "/var/lib/origin/.docker", "xusr": true, "atime": 1547019090.7756386, "isdir": true, "ctime": 1547019096.2027433, "isblk": false, "xgrp": false, "dev": 64771, "roth": false, "isfifo": false, "mode": "0700", "rusr": true}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o 
ok: [sp-os-master01.os.ad.scanplus.de] => {"changed": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/var/lib/origin/.docker"}}, "stat": {"atime": 1547019090.7756386, "block_size": 4096, "blocks": 8, "ctime": 1547019096.2027433, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 519865, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019096.2027433, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-infra01.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019090.7711923, "block_size": 4096, "blocks": 8, "ctime": 1547019096.20028, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 519396, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019096.20028, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-infra02.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019090.787856, "block_size": 4096, "blocks": 8, "ctime": 1547019097.496022, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 389727, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019097.496022, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node02.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019091.0995774, "block_size": 4096, "blocks": 8, "ctime": 1547019096.301668, "dev": 64772, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 389464, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019096.301668, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node03.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019091.1935937, "block_size": 4096, "blocks": 8, "ctime": 1547019096.2506962, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 390506, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019096.2506962, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node04.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019091.3459828, "block_size": 4096, "blocks": 8, "ctime": 1547019097.4711318, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 389971, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019097.4711318, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node05.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019091.575141, "block_size": 4096, "blocks": 8, "ctime": 1547019096.688204, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 1810, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019096.688204, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node06.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019091.725634, "block_size": 4096, "blocks": 8, "ctime": 1547019096.827717, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 1308, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019096.827717, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node07.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019091.9562736, "block_size": 4096, "blocks": 8, "ctime": 1547019097.408408, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 8676, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019097.408408, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node08.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019092.1652548, "block_size": 4096, "blocks": 8, "ctime": 1547019097.9523554, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 1372, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019097.9523554, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node09.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019092.243207, "block_size": 4096, "blocks": 8, "ctime": 1547019097.8763108, "dev": 64772, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 1764, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019097.8763108, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node10.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019092.3819728, "block_size": 4096, "blocks": 8, "ctime": 1547019097.5610285, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 1501, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019097.5610285, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true}}
ok: [sp-os-node11.os.ad.scanplus.de] => {"changed": false, "stat": {"atime": 1547019092.5729764, "block_size": 4096, "blocks": 8, "ctime": 1547019098.7150896, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 1755, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019098.7150896, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp":
false, "xoth": false, "xusr": true } } (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/var/lib/origin/.docker", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1547019098.8558843, "block_size": 4096, "inode": 1559, "isgid": false, "size": 4096, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": false, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 2, "issock": false, "rgrp": false, "gr_name": "root", "path": "/var/lib/origin/.docker", "xusr": true, "atime": 1547019092.6577187, "isdir": true, "ctime": 1547019098.8558843, "isblk": false, "xgrp": false, "dev": 64771, "roth": false, "isfifo": false, "mode": "0700", "rusr": true}, "changed": false}\n', '') ok: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/var/lib/origin/.docker" } }, "stat": { "atime": 1547019092.6577187, "block_size": 4096, "blocks": 8, "ctime": 1547019098.8558843, "dev": 64771, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 1559, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0700", "mtime": 1547019098.8558843, "nlink": 2, "path": "/var/lib/origin/.docker", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 4096, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [openshift_node : Create credentials for registry auth] **************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:16 Wednesday 09 January 2019 15:48:33 +0100 (0:00:02.576) 0:09:07.467 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o 
TASK [openshift_node : Create credentials for registry auth] ****************************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:16 Wednesday 09 January 2019 15:48:33 +0100 (0:00:02.576) 0:09:07.467 *****
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node03.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node05.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node06.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node07.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file
/usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node08.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node09.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node10.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node11.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node12.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node02.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": 
"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-infra01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-infra02.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node03.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node05.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": 
"/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node06.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node08.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node09.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node07.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } ok: [sp-os-node10.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", 
"test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node12.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-node11.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } (1, '\n{"msg": "", "failed": true, "state": "unknown", "changed": false, "invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}}\n', '') FAILED - RETRYING: Create credentials for registry auth (3 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "msg": "", "retries": 4, "state": "unknown" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"msg": "", "failed": true, "state": "unknown", "changed": false, "invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}}\n', '') FAILED - RETRYING: Create credentials for registry auth (2 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "msg": "", "retries": 4, "state": "unknown" } Using module file 
/usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"msg": "", "failed": true, "state": "unknown", "changed": false, "invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}}\n', '') FAILED - RETRYING: Create credentials for registry auth (1 retries left).Result was: { "attempts": 3, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "msg": "", "retries": 4, "state": "unknown" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node04.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"msg": "", "failed": true, "state": "unknown", "changed": false, "invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}}\n', '') fatal: [sp-os-node04.os.ad.scanplus.de]: FAILED! 
=> { "attempts": 3, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "msg": "", "state": "unknown" } TASK [openshift_node : Create credentials for any additional registries] **************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:33 Wednesday 09 January 2019 15:50:10 +0100 (0:01:37.569) 0:10:45.037 ***** TASK [openshift_node : Setup ro mount of /root/.docker for containerized hosts] ********************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:54 Wednesday 09 January 2019 15:50:12 +0100 (0:00:01.977) 0:10:47.014 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: 
TASK [openshift_node : Create credentials for any additional registries] ****************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:33 Wednesday 09 January 2019 15:50:10 +0100 (0:01:37.569) 0:10:45.037 *****
TASK [openshift_node : Setup ro mount of /root/.docker for containerized hosts] *********************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:54 Wednesday 09 January 2019 15:50:12 +0100 (0:00:01.977) 0:10:47.014 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-infra01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-infra02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node02.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node03.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node05.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node06.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node07.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node08.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node09.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node10.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node11.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
skipping: [sp-os-node12.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
META: ran handlers
META: ran handlers
PLAY [Restart nodes] ********************************************************************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [restart node] *********************************************************************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:50:14 +0100 (0:00:02.114) 0:10:49.129 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:50:47 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "124523", "LimitSIGPENDING": "63379", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "18828223427", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "18828223481", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:50:47 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:50:47 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:50:47 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "multi-user.target shutdown.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "18827872248", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63379", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:50:47 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "18827869957", "MainPID": "124523", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:50:47 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "85245952", "LimitRTTIME":
"18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:50:47 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:50:47 CET", "StandardInput": "null", "AssertTimestampMonotonic": "18827869957", "DefaultDependencies": "yes", "Requires": "-.mount var.mount basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "18827872182", "AllowIsolate": "no", "Wants": "dnsmasq.service atomic-openshift-master-api.service system.slice docker.service", "After": "dnsmasq.service basic.target system.slice docker.service var.mount -.mount chronyd.service ntpd.service systemd-journald.socket", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "18827862621", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "18827850105", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:50:47 CET", "ActiveEnterTimestampMonotonic": "18828223481", "ActiveExitTimestamp": "Wed 2019-01-09 14:50:47 CET", "ActiveExitTimestampMonotonic": "18827850105", "ActiveState": "active", "After": "dnsmasq.service basic.target system.slice docker.service var.mount -.mount chronyd.service ntpd.service systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:50:47 CET", "AssertTimestampMonotonic": "18827869957", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", 
"CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:50:47 CET", "ConditionTimestampMonotonic": "18827869957", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "124523", "ExecMainStartTimestamp": "Wed 2019-01-09 14:50:47 CET", "ExecMainStartTimestampMonotonic": "18827872182", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:50:47 CET", "InactiveEnterTimestampMonotonic": "18827862621", "InactiveExitTimestamp": "Wed 2019-01-09 14:50:47 CET", "InactiveExitTimestampMonotonic": "18827872248", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63379", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63379", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "124523", "MemoryAccounting": "yes", "MemoryCurrent": "85245952", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount var.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", 
"TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "dnsmasq.service atomic-openshift-master-api.service system.slice docker.service", "WatchdogTimestamp": "Wed 2019-01-09 14:50:47 CET", "WatchdogTimestampMonotonic": "18828223427", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:50:15 +0100 (0:00:01.092) 0:10:50.221 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-master01.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-master01.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "4", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.240"}, {"type": "Hostname", "address": "sp-os-master01.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.11.0+d4cacc0", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.11.0+d4cacc0", "bootID": "4b07ed97-d29a-4074-97b0-b2c664b8d325", "osImage": "OpenShift", "architecture": "amd64", "systemUUID": "422AFDB1-25A7-1A07-6E08-A455AF861E9A", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "4", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 806835916, "names": ["registry.redhat.io/openshift3/ose-control-plane@sha256:01e44374022557bcb5976ff43056196db2cfee87e978a972c1b8f2b111c481ca", 
"registry.redhat.io/openshift3/ose-control-plane:v3.11"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 507254763, "names": ["172.18.132.126:5000/netbox:latest"]}, {"sizeBytes": 465794074, "names": ["registry.access.redhat.com/openshift3/ose-web-console@sha256:ee9cae03904b7aaae3ef20af5f4a0dc2ba58ee83f461934c3250530877247c2b", "registry.access.redhat.com/openshift3/ose-web-console:v3.9.41"]}, {"sizeBytes": 317901894, "names": ["registry.access.redhat.com/openshift3/ose-web-console@sha256:638547b800fce0c688ea899d0ec5c6fc6cd2e4204a9c4d93c9aa1aa41d8ac976", "registry.access.redhat.com/openshift3/ose-web-console:v3.10"]}, {"sizeBytes": 311764319, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d4a999f6871a1f1903a5fcd97b33bd0b1906230d6e9e8c3b9e9a895dd3126aaa", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.10"]}, {"sizeBytes": 287585918, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:e1fb7342be7dfd44f144fee9419939443346874d3081c02008e264217e6af0f3", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.9.41"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283428870, "names": ["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:32d81f64dda7fad89f0044f538119be33fc3fb77728d777489bfb4c7c8bcb7d0", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.10"]}, {"sizeBytes": 273929397, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:81b3a19c210fa486b88421fcc02f259246d82c934c8bef90d3e30d02e709e16d"]}, {"sizeBytes": 273917754, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d010b3011fd33e9fbe6ddd94b53678ac892a1bc4aa0adadf0bffcf382d128c76"]}, {"sizeBytes": 273917754, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:022e214f2af6eca6615c674c030a83630c421d4501cdaf561b2a8f349129c76f"]}, {"sizeBytes": 273902497, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:95d814ba7cb7718e4ba153ef0b54f6180562ff4a93b34976a9c038064a1441bb", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.7"]}, {"sizeBytes": 268799199, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:819b03bb54d3f914f7fcd1e5cc7053197a67e54871f84c9028bf5f802651f881"]}, {"sizeBytes": 258848031, "names": ["registry.redhat.io/rhel7/etcd@sha256:6f5b73f472277b9b3f66148bf20247e33f04121236ad25715c1c272af29e620c", "registry.redhat.io/rhel7/etcd:3.2.22"]}, {"sizeBytes": 256010737, "names": ["registry.access.redhat.com/rhel7/etcd@sha256:2129d5aa0655c0eb614a94ee5792dca9721e64217509713a6bcbfd2160f9cbfa", "registry.access.redhat.com/rhel7/etcd:3.2.22"]}, {"sizeBytes": 
238134012, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:7cae0b38c12a11b07c47d4b50159cfd75b6b1a5620613957c26df10ed01e448d", "registry.access.redhat.com/openshift3/ose-pod:v3.11.51"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 89225603, "names": ["docker.io/python@sha256:7c3028aa4b9a30a34ce778b1fd4f460c9cdf174515a94641a89ef40c115b51e5", "docker.io/python:3.6-alpine"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590", "docker.io/nginx:1.11-alpine"]}, {"sizeBytes": 37824213, "names": ["docker.io/postgres@sha256:bf87ee22821e1bc5cedd5da2def1700685a9e3828605b31162d8f04e16c06385", "docker.io/postgres:9.6-alpine"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:15Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T19:07:29Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-master01.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-master01.os.ad.scanplus.de", "labels": {"logging-infra-fluentd": "true", "beta.kubernetes.io/os": "linux", "node-role.kubernetes.io/master": "true", "kubernetes.io/hostname": "sp-os-master01.os.ad.scanplus.de", "openshift-infra": "apiserver", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93870913", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", 
"node.openshift.io/md5sum": "c19d13c937c46024e9f6477bffaa02a4"}, "selfLink": "/api/v1/nodes/sp-os-master01.os.ad.scanplus.de", "uid": "ad287084-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-master01.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-master01.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "c19d13c937c46024e9f6477bffaa02a4", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-master01.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/master": "true", "openshift-infra": "apiserver" }, "name": "sp-os-master01.os.ad.scanplus.de", "resourceVersion": "93870913", "selfLink": "/api/v1/nodes/sp-os-master01.os.ad.scanplus.de", "uid": "ad287084-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-master01.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.240", "type": "InternalIP" }, { "address": "sp-os-master01.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "4", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "4", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2019-01-09T14:50:15Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T19:07:29Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ 
"registry.redhat.io/openshift3/ose-control-plane@sha256:01e44374022557bcb5976ff43056196db2cfee87e978a972c1b8f2b111c481ca", "registry.redhat.io/openshift3/ose-control-plane:v3.11" ], "sizeBytes": 806835916 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "172.18.132.126:5000/netbox:latest" ], "sizeBytes": 507254763 }, { "names": [ "registry.access.redhat.com/openshift3/ose-web-console@sha256:ee9cae03904b7aaae3ef20af5f4a0dc2ba58ee83f461934c3250530877247c2b", "registry.access.redhat.com/openshift3/ose-web-console:v3.9.41" ], "sizeBytes": 465794074 }, { "names": [ "registry.access.redhat.com/openshift3/ose-web-console@sha256:638547b800fce0c688ea899d0ec5c6fc6cd2e4204a9c4d93c9aa1aa41d8ac976", "registry.access.redhat.com/openshift3/ose-web-console:v3.10" ], "sizeBytes": 317901894 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d4a999f6871a1f1903a5fcd97b33bd0b1906230d6e9e8c3b9e9a895dd3126aaa", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.10" ], "sizeBytes": 311764319 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:e1fb7342be7dfd44f144fee9419939443346874d3081c02008e264217e6af0f3", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.9.41" ], "sizeBytes": 287585918 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:32d81f64dda7fad89f0044f538119be33fc3fb77728d777489bfb4c7c8bcb7d0", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.10" ], "sizeBytes": 283428870 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:81b3a19c210fa486b88421fcc02f259246d82c934c8bef90d3e30d02e709e16d" ], "sizeBytes": 273929397 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d010b3011fd33e9fbe6ddd94b53678ac892a1bc4aa0adadf0bffcf382d128c76" ], "sizeBytes": 273917754 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:022e214f2af6eca6615c674c030a83630c421d4501cdaf561b2a8f349129c76f" ], "sizeBytes": 273917754 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:95d814ba7cb7718e4ba153ef0b54f6180562ff4a93b34976a9c038064a1441bb", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.7" ], "sizeBytes": 273902497 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:819b03bb54d3f914f7fcd1e5cc7053197a67e54871f84c9028bf5f802651f881" ], "sizeBytes": 268799199 }, { "names": [ "registry.redhat.io/rhel7/etcd@sha256:6f5b73f472277b9b3f66148bf20247e33f04121236ad25715c1c272af29e620c", "registry.redhat.io/rhel7/etcd:3.2.22" ], "sizeBytes": 258848031 }, { 
"names": [ "registry.access.redhat.com/rhel7/etcd@sha256:2129d5aa0655c0eb614a94ee5792dca9721e64217509713a6bcbfd2160f9cbfa", "registry.access.redhat.com/rhel7/etcd:3.2.22" ], "sizeBytes": 256010737 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:7cae0b38c12a11b07c47d4b50159cfd75b6b1a5620613957c26df10ed01e448d", "registry.access.redhat.com/openshift3/ose-pod:v3.11.51" ], "sizeBytes": 238134012 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker.io/python@sha256:7c3028aa4b9a30a34ce778b1fd4f460c9cdf174515a94641a89ef40c115b51e5", "docker.io/python:3.6-alpine" ], "sizeBytes": 89225603 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590", "docker.io/nginx:1.11-alpine" ], "sizeBytes": 54277621 }, { "names": [ "docker.io/postgres@sha256:bf87ee22821e1bc5cedd5da2def1700685a9e3828605b31162d8f04e16c06385", "docker.io/postgres:9.6-alpine" ], "sizeBytes": 37824213 } ], "nodeInfo": { "architecture": "amd64", "bootID": "4b07ed97-d29a-4074-97b0-b2c664b8d325", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.11.0+d4cacc0", "kubeletVersion": "v1.11.0+d4cacc0", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "OpenShift", "systemUUID": "422AFDB1-25A7-1A07-6E08-A455AF861E9A" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-master01.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node 
sp-os-master01.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "4", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.240"}, {"type": "Hostname", "address": "sp-os-master01.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.11.0+d4cacc0", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.11.0+d4cacc0", "bootID": "4b07ed97-d29a-4074-97b0-b2c664b8d325", "osImage": "OpenShift", "architecture": "amd64", "systemUUID": "422AFDB1-25A7-1A07-6E08-A455AF861E9A", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "4", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 806835916, "names": ["registry.redhat.io/openshift3/ose-control-plane@sha256:01e44374022557bcb5976ff43056196db2cfee87e978a972c1b8f2b111c481ca", "registry.redhat.io/openshift3/ose-control-plane:v3.11"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 507254763, "names": ["172.18.132.126:5000/netbox:latest"]}, {"sizeBytes": 465794074, "names": ["registry.access.redhat.com/openshift3/ose-web-console@sha256:ee9cae03904b7aaae3ef20af5f4a0dc2ba58ee83f461934c3250530877247c2b", "registry.access.redhat.com/openshift3/ose-web-console:v3.9.41"]}, {"sizeBytes": 317901894, "names": ["registry.access.redhat.com/openshift3/ose-web-console@sha256:638547b800fce0c688ea899d0ec5c6fc6cd2e4204a9c4d93c9aa1aa41d8ac976", "registry.access.redhat.com/openshift3/ose-web-console:v3.10"]}, {"sizeBytes": 311764319, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d4a999f6871a1f1903a5fcd97b33bd0b1906230d6e9e8c3b9e9a895dd3126aaa", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.10"]}, {"sizeBytes": 287585918, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:e1fb7342be7dfd44f144fee9419939443346874d3081c02008e264217e6af0f3", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.9.41"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283428870, "names": 
["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:32d81f64dda7fad89f0044f538119be33fc3fb77728d777489bfb4c7c8bcb7d0", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.10"]}, {"sizeBytes": 273929397, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:81b3a19c210fa486b88421fcc02f259246d82c934c8bef90d3e30d02e709e16d"]}, {"sizeBytes": 273917754, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d010b3011fd33e9fbe6ddd94b53678ac892a1bc4aa0adadf0bffcf382d128c76"]}, {"sizeBytes": 273917754, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:022e214f2af6eca6615c674c030a83630c421d4501cdaf561b2a8f349129c76f"]}, {"sizeBytes": 273902497, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:95d814ba7cb7718e4ba153ef0b54f6180562ff4a93b34976a9c038064a1441bb", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.7"]}, {"sizeBytes": 268799199, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:819b03bb54d3f914f7fcd1e5cc7053197a67e54871f84c9028bf5f802651f881"]}, {"sizeBytes": 258848031, "names": ["registry.redhat.io/rhel7/etcd@sha256:6f5b73f472277b9b3f66148bf20247e33f04121236ad25715c1c272af29e620c", "registry.redhat.io/rhel7/etcd:3.2.22"]}, {"sizeBytes": 256010737, "names": ["registry.access.redhat.com/rhel7/etcd@sha256:2129d5aa0655c0eb614a94ee5792dca9721e64217509713a6bcbfd2160f9cbfa", "registry.access.redhat.com/rhel7/etcd:3.2.22"]}, {"sizeBytes": 238134012, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:7cae0b38c12a11b07c47d4b50159cfd75b6b1a5620613957c26df10ed01e448d", "registry.access.redhat.com/openshift3/ose-pod:v3.11.51"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 89225603, "names": ["docker.io/python@sha256:7c3028aa4b9a30a34ce778b1fd4f460c9cdf174515a94641a89ef40c115b51e5", "docker.io/python:3.6-alpine"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590", "docker.io/nginx:1.11-alpine"]}, {"sizeBytes": 37824213, "names": ["docker.io/postgres@sha256:bf87ee22821e1bc5cedd5da2def1700685a9e3828605b31162d8f04e16c06385", "docker.io/postgres:9.6-alpine"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:15Z", 
"message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:15Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T19:07:29Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:15Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-master01.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-master01.os.ad.scanplus.de", "labels": {"logging-infra-fluentd": "true", "beta.kubernetes.io/os": "linux", "node-role.kubernetes.io/master": "true", "kubernetes.io/hostname": "sp-os-master01.os.ad.scanplus.de", "openshift-infra": "apiserver", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93870913", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "c19d13c937c46024e9f6477bffaa02a4"}, "selfLink": "/api/v1/nodes/sp-os-master01.os.ad.scanplus.de", "uid": "ad287084-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-master01.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-master01.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "c19d13c937c46024e9f6477bffaa02a4", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-master01.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/master": "true", "openshift-infra": "apiserver" }, "name": "sp-os-master01.os.ad.scanplus.de", "resourceVersion": "93870913", "selfLink": "/api/v1/nodes/sp-os-master01.os.ad.scanplus.de", "uid": "ad287084-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-master01.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.240", "type": "InternalIP" }, { "address": "sp-os-master01.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "4", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "4", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has sufficient disk space 
available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2019-01-09T14:50:15Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:15Z", "lastTransitionTime": "2018-09-13T19:07:29Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.redhat.io/openshift3/ose-control-plane@sha256:01e44374022557bcb5976ff43056196db2cfee87e978a972c1b8f2b111c481ca", "registry.redhat.io/openshift3/ose-control-plane:v3.11" ], "sizeBytes": 806835916 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "172.18.132.126:5000/netbox:latest" ], "sizeBytes": 507254763 }, { "names": [ "registry.access.redhat.com/openshift3/ose-web-console@sha256:ee9cae03904b7aaae3ef20af5f4a0dc2ba58ee83f461934c3250530877247c2b", "registry.access.redhat.com/openshift3/ose-web-console:v3.9.41" ], "sizeBytes": 465794074 }, { "names": [ "registry.access.redhat.com/openshift3/ose-web-console@sha256:638547b800fce0c688ea899d0ec5c6fc6cd2e4204a9c4d93c9aa1aa41d8ac976", "registry.access.redhat.com/openshift3/ose-web-console:v3.10" ], "sizeBytes": 317901894 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d4a999f6871a1f1903a5fcd97b33bd0b1906230d6e9e8c3b9e9a895dd3126aaa", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.10" ], "sizeBytes": 311764319 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:e1fb7342be7dfd44f144fee9419939443346874d3081c02008e264217e6af0f3", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.9.41" ], "sizeBytes": 287585918 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ 
"registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:32d81f64dda7fad89f0044f538119be33fc3fb77728d777489bfb4c7c8bcb7d0", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.10" ], "sizeBytes": 283428870 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:81b3a19c210fa486b88421fcc02f259246d82c934c8bef90d3e30d02e709e16d" ], "sizeBytes": 273929397 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d010b3011fd33e9fbe6ddd94b53678ac892a1bc4aa0adadf0bffcf382d128c76" ], "sizeBytes": 273917754 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:022e214f2af6eca6615c674c030a83630c421d4501cdaf561b2a8f349129c76f" ], "sizeBytes": 273917754 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:95d814ba7cb7718e4ba153ef0b54f6180562ff4a93b34976a9c038064a1441bb", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.7" ], "sizeBytes": 273902497 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:819b03bb54d3f914f7fcd1e5cc7053197a67e54871f84c9028bf5f802651f881" ], "sizeBytes": 268799199 }, { "names": [ "registry.redhat.io/rhel7/etcd@sha256:6f5b73f472277b9b3f66148bf20247e33f04121236ad25715c1c272af29e620c", "registry.redhat.io/rhel7/etcd:3.2.22" ], "sizeBytes": 258848031 }, { "names": [ "registry.access.redhat.com/rhel7/etcd@sha256:2129d5aa0655c0eb614a94ee5792dca9721e64217509713a6bcbfd2160f9cbfa", "registry.access.redhat.com/rhel7/etcd:3.2.22" ], "sizeBytes": 256010737 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:7cae0b38c12a11b07c47d4b50159cfd75b6b1a5620613957c26df10ed01e448d", "registry.access.redhat.com/openshift3/ose-pod:v3.11.51" ], "sizeBytes": 238134012 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker.io/python@sha256:7c3028aa4b9a30a34ce778b1fd4f460c9cdf174515a94641a89ef40c115b51e5", "docker.io/python:3.6-alpine" ], "sizeBytes": 89225603 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590", "docker.io/nginx:1.11-alpine" ], "sizeBytes": 54277621 }, { "names": 
[ "docker.io/postgres@sha256:bf87ee22821e1bc5cedd5da2def1700685a9e3828605b31162d8f04e16c06385", "docker.io/postgres:9.6-alpine" ], "sizeBytes": 37824213 } ], "nodeInfo": { "architecture": "amd64", "bootID": "4b07ed97-d29a-4074-97b0-b2c664b8d325", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.11.0+d4cacc0", "kubeletVersion": "v1.11.0+d4cacc0", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "OpenShift", "systemUUID": "422AFDB1-25A7-1A07-6E08-A455AF861E9A" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-master01.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-master01.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "4", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.240"}, {"type": "Hostname", "address": "sp-os-master01.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.11.0+d4cacc0", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.11.0+d4cacc0", "bootID": "4b07ed97-d29a-4074-97b0-b2c664b8d325", "osImage": "OpenShift", "architecture": "amd64", "systemUUID": "422AFDB1-25A7-1A07-6E08-A455AF861E9A", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "4", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 806835916, "names": ["registry.redhat.io/openshift3/ose-control-plane@sha256:01e44374022557bcb5976ff43056196db2cfee87e978a972c1b8f2b111c481ca", "registry.redhat.io/openshift3/ose-control-plane:v3.11"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", 
"registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 507254763, "names": ["172.18.132.126:5000/netbox:latest"]}, {"sizeBytes": 465794074, "names": ["registry.access.redhat.com/openshift3/ose-web-console@sha256:ee9cae03904b7aaae3ef20af5f4a0dc2ba58ee83f461934c3250530877247c2b", "registry.access.redhat.com/openshift3/ose-web-console:v3.9.41"]}, {"sizeBytes": 317901894, "names": ["registry.access.redhat.com/openshift3/ose-web-console@sha256:638547b800fce0c688ea899d0ec5c6fc6cd2e4204a9c4d93c9aa1aa41d8ac976", "registry.access.redhat.com/openshift3/ose-web-console:v3.10"]}, {"sizeBytes": 311764319, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d4a999f6871a1f1903a5fcd97b33bd0b1906230d6e9e8c3b9e9a895dd3126aaa", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.10"]}, {"sizeBytes": 287585918, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:e1fb7342be7dfd44f144fee9419939443346874d3081c02008e264217e6af0f3", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.9.41"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283428870, "names": ["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:32d81f64dda7fad89f0044f538119be33fc3fb77728d777489bfb4c7c8bcb7d0", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.10"]}, {"sizeBytes": 273929397, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:81b3a19c210fa486b88421fcc02f259246d82c934c8bef90d3e30d02e709e16d"]}, {"sizeBytes": 273917754, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:022e214f2af6eca6615c674c030a83630c421d4501cdaf561b2a8f349129c76f"]}, {"sizeBytes": 273917754, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d010b3011fd33e9fbe6ddd94b53678ac892a1bc4aa0adadf0bffcf382d128c76"]}, {"sizeBytes": 273902497, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:95d814ba7cb7718e4ba153ef0b54f6180562ff4a93b34976a9c038064a1441bb", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.7"]}, {"sizeBytes": 268799199, "names": ["registry.access.redhat.com/openshift3/ose-service-catalog@sha256:819b03bb54d3f914f7fcd1e5cc7053197a67e54871f84c9028bf5f802651f881"]}, {"sizeBytes": 258848031, "names": ["registry.redhat.io/rhel7/etcd@sha256:6f5b73f472277b9b3f66148bf20247e33f04121236ad25715c1c272af29e620c", "registry.redhat.io/rhel7/etcd:3.2.22"]}, {"sizeBytes": 256010737, "names": ["registry.access.redhat.com/rhel7/etcd@sha256:2129d5aa0655c0eb614a94ee5792dca9721e64217509713a6bcbfd2160f9cbfa", "registry.access.redhat.com/rhel7/etcd:3.2.22"]}, {"sizeBytes": 238134012, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:7cae0b38c12a11b07c47d4b50159cfd75b6b1a5620613957c26df10ed01e448d", "registry.access.redhat.com/openshift3/ose-pod:v3.11.51"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 222046071, 
"names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 89225603, "names": ["docker.io/python@sha256:7c3028aa4b9a30a34ce778b1fd4f460c9cdf174515a94641a89ef40c115b51e5", "docker.io/python:3.6-alpine"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590", "docker.io/nginx:1.11-alpine"]}, {"sizeBytes": 37824213, "names": ["docker.io/postgres@sha256:bf87ee22821e1bc5cedd5da2def1700685a9e3828605b31162d8f04e16c06385", "docker.io/postgres:9.6-alpine"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:25Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:25Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-09-13T20:36:53Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:25Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:50:25Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:50:25Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T19:07:29Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:25Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-master01.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-master01.os.ad.scanplus.de", "labels": {"logging-infra-fluentd": "true", "beta.kubernetes.io/os": "linux", "node-role.kubernetes.io/master": "true", "kubernetes.io/hostname": "sp-os-master01.os.ad.scanplus.de", "openshift-infra": "apiserver", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93870960", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "c19d13c937c46024e9f6477bffaa02a4"}, "selfLink": "/api/v1/nodes/sp-os-master01.os.ad.scanplus.de", "uid": "ad287084-0687-11e8-8e46-005056aa3492"}}]}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, 
"kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-master01.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-master01.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "c19d13c937c46024e9f6477bffaa02a4", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-master01.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/master": "true", "openshift-infra": "apiserver" }, "name": "sp-os-master01.os.ad.scanplus.de", "resourceVersion": "93870960", "selfLink": "/api/v1/nodes/sp-os-master01.os.ad.scanplus.de", "uid": "ad287084-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-master01.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.240", "type": "InternalIP" }, { "address": "sp-os-master01.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "4", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "4", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:25Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:25Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:25Z", "lastTransitionTime": "2018-09-13T20:36:53Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:25Z", "lastTransitionTime": "2019-01-09T14:50:25Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:25Z", "lastTransitionTime": "2018-09-13T19:07:29Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.redhat.io/openshift3/ose-control-plane@sha256:01e44374022557bcb5976ff43056196db2cfee87e978a972c1b8f2b111c481ca", "registry.redhat.io/openshift3/ose-control-plane:v3.11" ], "sizeBytes": 806835916 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10" ], "sizeBytes": 788614541 }, { "names": [ 
"registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "172.18.132.126:5000/netbox:latest" ], "sizeBytes": 507254763 }, { "names": [ "registry.access.redhat.com/openshift3/ose-web-console@sha256:ee9cae03904b7aaae3ef20af5f4a0dc2ba58ee83f461934c3250530877247c2b", "registry.access.redhat.com/openshift3/ose-web-console:v3.9.41" ], "sizeBytes": 465794074 }, { "names": [ "registry.access.redhat.com/openshift3/ose-web-console@sha256:638547b800fce0c688ea899d0ec5c6fc6cd2e4204a9c4d93c9aa1aa41d8ac976", "registry.access.redhat.com/openshift3/ose-web-console:v3.10" ], "sizeBytes": 317901894 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d4a999f6871a1f1903a5fcd97b33bd0b1906230d6e9e8c3b9e9a895dd3126aaa", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.10" ], "sizeBytes": 311764319 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:e1fb7342be7dfd44f144fee9419939443346874d3081c02008e264217e6af0f3", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.9.41" ], "sizeBytes": 287585918 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:32d81f64dda7fad89f0044f538119be33fc3fb77728d777489bfb4c7c8bcb7d0", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.10" ], "sizeBytes": 283428870 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:81b3a19c210fa486b88421fcc02f259246d82c934c8bef90d3e30d02e709e16d" ], "sizeBytes": 273929397 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:022e214f2af6eca6615c674c030a83630c421d4501cdaf561b2a8f349129c76f" ], "sizeBytes": 273917754 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:d010b3011fd33e9fbe6ddd94b53678ac892a1bc4aa0adadf0bffcf382d128c76" ], "sizeBytes": 273917754 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:95d814ba7cb7718e4ba153ef0b54f6180562ff4a93b34976a9c038064a1441bb", "registry.access.redhat.com/openshift3/ose-service-catalog:v3.7" ], "sizeBytes": 273902497 }, { "names": [ "registry.access.redhat.com/openshift3/ose-service-catalog@sha256:819b03bb54d3f914f7fcd1e5cc7053197a67e54871f84c9028bf5f802651f881" ], "sizeBytes": 268799199 }, { "names": [ "registry.redhat.io/rhel7/etcd@sha256:6f5b73f472277b9b3f66148bf20247e33f04121236ad25715c1c272af29e620c", "registry.redhat.io/rhel7/etcd:3.2.22" ], "sizeBytes": 258848031 }, { "names": [ "registry.access.redhat.com/rhel7/etcd@sha256:2129d5aa0655c0eb614a94ee5792dca9721e64217509713a6bcbfd2160f9cbfa", "registry.access.redhat.com/rhel7/etcd:3.2.22" ], "sizeBytes": 256010737 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:7cae0b38c12a11b07c47d4b50159cfd75b6b1a5620613957c26df10ed01e448d", "registry.access.redhat.com/openshift3/ose-pod:v3.11.51" ], "sizeBytes": 238134012 }, { "names": [ 
"registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker.io/python@sha256:7c3028aa4b9a30a34ce778b1fd4f460c9cdf174515a94641a89ef40c115b51e5", "docker.io/python:3.6-alpine" ], "sizeBytes": 89225603 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590", "docker.io/nginx:1.11-alpine" ], "sizeBytes": 54277621 }, { "names": [ "docker.io/postgres@sha256:bf87ee22821e1bc5cedd5da2def1700685a9e3828605b31162d8f04e16c06385", "docker.io/postgres:9.6-alpine" ], "sizeBytes": 37824213 } ], "nodeInfo": { "architecture": "amd64", "bootID": "4b07ed97-d29a-4074-97b0-b2c664b8d325", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.11.0+d4cacc0", "kubeletVersion": "v1.11.0+d4cacc0", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "OpenShift", "systemUUID": "422AFDB1-25A7-1A07-6E08-A455AF861E9A" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:50:27 +0100 (0:00:11.496) 0:11:01.718 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-infra01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ 
path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:50:59 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "25346", "LimitSIGPENDING": "23059", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "5435183346459", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "5435183346497", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:50:59 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:50:59 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:50:59 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "shutdown.target multi-user.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "5435183054153", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "23059", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:50:59 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "5435183052707", "MainPID": "25346", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:50:59 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "64159744", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:50:59 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:50:59 CET", "StandardInput": "null", "AssertTimestampMonotonic": "5435183052708", "DefaultDependencies": "yes", "Requires": "-.mount var.mount 
basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "5435183054096", "AllowIsolate": "no", "Wants": "system.slice docker.service dnsmasq.service", "After": "basic.target ntpd.service dnsmasq.service chronyd.service system.slice var.mount systemd-journald.socket docker.service -.mount", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "5435183048583", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "5435183030687", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-infra01.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:50:59 CET", "ActiveEnterTimestampMonotonic": "5435183346497", "ActiveExitTimestamp": "Wed 2019-01-09 14:50:59 CET", "ActiveExitTimestampMonotonic": "5435183030687", "ActiveState": "active", "After": "basic.target ntpd.service dnsmasq.service chronyd.service system.slice var.mount systemd-journald.socket docker.service -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:50:59 CET", "AssertTimestampMonotonic": "5435183052708", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:50:59 CET", "ConditionTimestampMonotonic": "5435183052707", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "25346", 
"ExecMainStartTimestamp": "Wed 2019-01-09 14:50:59 CET", "ExecMainStartTimestampMonotonic": "5435183054096", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:50:59 CET", "InactiveEnterTimestampMonotonic": "5435183048583", "InactiveExitTimestamp": "Wed 2019-01-09 14:50:59 CET", "InactiveExitTimestampMonotonic": "5435183054153", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "23059", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "23059", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "25346", "MemoryAccounting": "yes", "MemoryCurrent": "64159744", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount var.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker.service dnsmasq.service", "WatchdogTimestamp": "Wed 2019-01-09 14:50:59 CET", "WatchdogTimestampMonotonic": "5435183346459", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] 
******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:50:28 +0100 (0:00:00.817) 0:11:02.535 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-infra01.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-infra01.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5927932Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.241"}, {"type": "Hostname", "address": "sp-os-infra01.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "e8fd93e6-6af9-4cc7-984f-3312104dea6a", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A43A6-9C35-EF0E-B5B5-5B561C56E0A1", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5825532Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1435113523, "names": ["docker.io/openshift/origin-metrics-deployer@sha256:714ac774dce21991a55746f542eb2eb635eee568c14fe7f7e2e6fcd5653e3bd6", "docker.io/openshift/origin-metrics-deployer:latest"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1253410148, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1078227309, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23"]}, {"sizeBytes": 1059096712, "names": 
["registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23"]}, {"sizeBytes": 1059094256, "names": ["registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23"]}, {"sizeBytes": 807879920, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 674269936, "names": ["registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7"]}, {"sizeBytes": 538578734, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-qa/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8"]}, {"sizeBytes": 538578478, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:c098692334e5da4fa911b8cce249a932baf3cb4bb02616473e17314fcdf36e91"]}, {"sizeBytes": 506897040, "names": ["docker-registry.default.svc:5000/scanplus-netbox/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306", "docker-registry.default.svc:5000/sp-netbox-dev/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306"]}, {"sizeBytes": 459352008, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23"]}, {"sizeBytes": 435574212, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41"]}, {"sizeBytes": 419508201, "names": ["registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed"]}, {"sizeBytes": 385380226, "names": ["registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23"]}, {"sizeBytes": 311278309, "names": ["registry.access.redhat.com/rhscl/postgresql-95-rhel7@sha256:d8b932fab1c9eb48a8a2f2fd94caf6dcff18a78413f621c92d280b158754d7ed"]}, {"sizeBytes": 299475138, "names": 
["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41"]}, {"sizeBytes": 286901023, "names": ["docker.io/postgres@sha256:df5b5545e937ab152f2cf401fccb515d49363dfce1333c4b8b2580b6c0bbc207"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283460958, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10"]}, {"sizeBytes": 252162196, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:1c2eab51d63b055cbb8e692932aaefe6a789cf97339135ad603f44d1e901bb7c", "registry.access.redhat.com/openshift3/registry-console:v3.7"]}, {"sizeBytes": 231249835, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9"]}, {"sizeBytes": 230670018, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 223765764, "names": ["registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 113556326, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace", "docker-registry.default.svc:5000/sp-netbox-prod/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace"]}, {"sizeBytes": 113556326, "names": 
["docker-registry.default.svc:5000/scanplus-netbox/nginx@sha256:576f076a0f0f98630d1e8f38aaf8112f1cf7d2cb11829103e35d1d2512dfe86a"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:28Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:18:59Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-infra01.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-infra01.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "infra", "kubernetes.io/hostname": "sp-os-infra01.os.ad.scanplus.de", "node-role.kubernetes.io/infra": "true", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93870978", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b"}, "selfLink": "/api/v1/nodes/sp-os-infra01.os.ad.scanplus.de", "uid": "ad32bbb1-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-infra01.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-infra01.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-infra01.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/infra": "true", "region": "infra", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-infra01.os.ad.scanplus.de", "resourceVersion": "93870978", "selfLink": "/api/v1/nodes/sp-os-infra01.os.ad.scanplus.de", "uid": "ad32bbb1-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-infra01.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.241", "type": 
"InternalIP" }, { "address": "sp-os-infra01.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5825532Ki", "pods": "250" }, "capacity": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5927932Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2019-01-09T14:50:28Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2018-09-13T21:18:59Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker.io/openshift/origin-metrics-deployer@sha256:714ac774dce21991a55746f542eb2eb635eee568c14fe7f7e2e6fcd5653e3bd6", "docker.io/openshift/origin-metrics-deployer:latest" ], "sizeBytes": 1435113523 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41" ], "sizeBytes": 1253410148 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23" ], "sizeBytes": 1078227309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23" ], "sizeBytes": 1059096712 }, { "names": [ "registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23" ], "sizeBytes": 1059094256 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10" ], "sizeBytes": 807879920 }, { "names": [ 
"registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7" ], "sizeBytes": 674269936 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-qa/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8" ], "sizeBytes": 538578734 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:c098692334e5da4fa911b8cce249a932baf3cb4bb02616473e17314fcdf36e91" ], "sizeBytes": 538578478 }, { "names": [ "docker-registry.default.svc:5000/scanplus-netbox/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306", "docker-registry.default.svc:5000/sp-netbox-dev/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306" ], "sizeBytes": 506897040 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23" ], "sizeBytes": 459352008 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41" ], "sizeBytes": 435574212 }, { "names": [ "registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed" ], "sizeBytes": 419508201 }, { "names": [ "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23" ], "sizeBytes": 385380226 }, { "names": [ "registry.access.redhat.com/rhscl/postgresql-95-rhel7@sha256:d8b932fab1c9eb48a8a2f2fd94caf6dcff18a78413f621c92d280b158754d7ed" ], "sizeBytes": 311278309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41" ], "sizeBytes": 299475138 }, { "names": [ "docker.io/postgres@sha256:df5b5545e937ab152f2cf401fccb515d49363dfce1333c4b8b2580b6c0bbc207" ], "sizeBytes": 286901023 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", 
"registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10" ], "sizeBytes": 283460958 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:1c2eab51d63b055cbb8e692932aaefe6a789cf97339135ad603f44d1e901bb7c", "registry.access.redhat.com/openshift3/registry-console:v3.7" ], "sizeBytes": 252162196 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9" ], "sizeBytes": 231249835 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10" ], "sizeBytes": 230670018 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7" ], "sizeBytes": 223765764 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace", "docker-registry.default.svc:5000/sp-netbox-prod/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace" ], "sizeBytes": 113556326 }, { "names": [ "docker-registry.default.svc:5000/scanplus-netbox/nginx@sha256:576f076a0f0f98630d1e8f38aaf8112f1cf7d2cb11829103e35d1d2512dfe86a" ], "sizeBytes": 113556326 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590" ], "sizeBytes": 54277621 } ], "nodeInfo": { "architecture": "amd64", "bootID": "e8fd93e6-6af9-4cc7-984f-3312104dea6a", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A43A6-9C35-EF0E-B5B5-5B561C56E0A1" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file 
/usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-infra01.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-infra01.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5927932Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.241"}, {"type": "Hostname", "address": "sp-os-infra01.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "e8fd93e6-6af9-4cc7-984f-3312104dea6a", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A43A6-9C35-EF0E-B5B5-5B561C56E0A1", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5825532Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1435113523, "names": ["docker.io/openshift/origin-metrics-deployer@sha256:714ac774dce21991a55746f542eb2eb635eee568c14fe7f7e2e6fcd5653e3bd6", "docker.io/openshift/origin-metrics-deployer:latest"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1253410148, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1078227309, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23"]}, {"sizeBytes": 1059096712, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23"]}, {"sizeBytes": 1059094256, "names": ["registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23"]}, 
{"sizeBytes": 807879920, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 674269936, "names": ["registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7"]}, {"sizeBytes": 538578734, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-qa/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8"]}, {"sizeBytes": 538578478, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:c098692334e5da4fa911b8cce249a932baf3cb4bb02616473e17314fcdf36e91"]}, {"sizeBytes": 506897040, "names": ["docker-registry.default.svc:5000/scanplus-netbox/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306", "docker-registry.default.svc:5000/sp-netbox-dev/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306"]}, {"sizeBytes": 459352008, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23"]}, {"sizeBytes": 435574212, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41"]}, {"sizeBytes": 419508201, "names": ["registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed"]}, {"sizeBytes": 385380226, "names": ["registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23"]}, {"sizeBytes": 311278309, "names": ["registry.access.redhat.com/rhscl/postgresql-95-rhel7@sha256:d8b932fab1c9eb48a8a2f2fd94caf6dcff18a78413f621c92d280b158754d7ed"]}, {"sizeBytes": 299475138, "names": ["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41"]}, {"sizeBytes": 286901023, "names": ["docker.io/postgres@sha256:df5b5545e937ab152f2cf401fccb515d49363dfce1333c4b8b2580b6c0bbc207"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", 
"registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283460958, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10"]}, {"sizeBytes": 252162196, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:1c2eab51d63b055cbb8e692932aaefe6a789cf97339135ad603f44d1e901bb7c", "registry.access.redhat.com/openshift3/registry-console:v3.7"]}, {"sizeBytes": 231249835, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9"]}, {"sizeBytes": 230670018, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 223765764, "names": ["registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 113556326, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace", "docker-registry.default.svc:5000/sp-netbox-prod/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace"]}, {"sizeBytes": 113556326, "names": ["docker-registry.default.svc:5000/scanplus-netbox/nginx@sha256:576f076a0f0f98630d1e8f38aaf8112f1cf7d2cb11829103e35d1d2512dfe86a"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": 
"2019-01-09T14:50:28Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:28Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:18:59Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:28Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-infra01.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-infra01.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "infra", "kubernetes.io/hostname": "sp-os-infra01.os.ad.scanplus.de", "node-role.kubernetes.io/infra": "true", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93870978", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b"}, "selfLink": "/api/v1/nodes/sp-os-infra01.os.ad.scanplus.de", "uid": "ad32bbb1-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-infra01.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-infra01.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-infra01.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/infra": "true", "region": "infra", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-infra01.os.ad.scanplus.de", "resourceVersion": "93870978", "selfLink": "/api/v1/nodes/sp-os-infra01.os.ad.scanplus.de", "uid": "ad32bbb1-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-infra01.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.241", "type": "InternalIP" }, { "address": "sp-os-infra01.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5825532Ki", "pods": "250" }, "capacity": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5927932Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": 
"2018-11-07T16:04:49Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2019-01-09T14:50:28Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:28Z", "lastTransitionTime": "2018-09-13T21:18:59Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker.io/openshift/origin-metrics-deployer@sha256:714ac774dce21991a55746f542eb2eb635eee568c14fe7f7e2e6fcd5653e3bd6", "docker.io/openshift/origin-metrics-deployer:latest" ], "sizeBytes": 1435113523 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41" ], "sizeBytes": 1253410148 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23" ], "sizeBytes": 1078227309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23" ], "sizeBytes": 1059096712 }, { "names": [ "registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23" ], "sizeBytes": 1059094256 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10" ], "sizeBytes": 807879920 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", 
"registry.access.redhat.com/openshift3/logging-kibana:v3.7" ], "sizeBytes": 674269936 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-qa/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8" ], "sizeBytes": 538578734 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:c098692334e5da4fa911b8cce249a932baf3cb4bb02616473e17314fcdf36e91" ], "sizeBytes": 538578478 }, { "names": [ "docker-registry.default.svc:5000/scanplus-netbox/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306", "docker-registry.default.svc:5000/sp-netbox-dev/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306" ], "sizeBytes": 506897040 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23" ], "sizeBytes": 459352008 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41" ], "sizeBytes": 435574212 }, { "names": [ "registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed" ], "sizeBytes": 419508201 }, { "names": [ "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23" ], "sizeBytes": 385380226 }, { "names": [ "registry.access.redhat.com/rhscl/postgresql-95-rhel7@sha256:d8b932fab1c9eb48a8a2f2fd94caf6dcff18a78413f621c92d280b158754d7ed" ], "sizeBytes": 311278309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41" ], "sizeBytes": 299475138 }, { "names": [ "docker.io/postgres@sha256:df5b5545e937ab152f2cf401fccb515d49363dfce1333c4b8b2580b6c0bbc207" ], "sizeBytes": 286901023 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10" ], "sizeBytes": 283460958 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:1c2eab51d63b055cbb8e692932aaefe6a789cf97339135ad603f44d1e901bb7c", "registry.access.redhat.com/openshift3/registry-console:v3.7" ], "sizeBytes": 252162196 }, { "names": [ 
"registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9" ], "sizeBytes": 231249835 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10" ], "sizeBytes": 230670018 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7" ], "sizeBytes": 223765764 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace", "docker-registry.default.svc:5000/sp-netbox-prod/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace" ], "sizeBytes": 113556326 }, { "names": [ "docker-registry.default.svc:5000/scanplus-netbox/nginx@sha256:576f076a0f0f98630d1e8f38aaf8112f1cf7d2cb11829103e35d1d2512dfe86a" ], "sizeBytes": 113556326 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590" ], "sizeBytes": 54277621 } ], "nodeInfo": { "architecture": "amd64", "bootID": "e8fd93e6-6af9-4cc7-984f-3312104dea6a", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A43A6-9C35-EF0E-B5B5-5B561C56E0A1" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, 
"field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-infra01.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-infra01.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5927932Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.241"}, {"type": "Hostname", "address": "sp-os-infra01.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "e8fd93e6-6af9-4cc7-984f-3312104dea6a", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A43A6-9C35-EF0E-B5B5-5B561C56E0A1", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5825532Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1435113523, "names": ["docker.io/openshift/origin-metrics-deployer@sha256:714ac774dce21991a55746f542eb2eb635eee568c14fe7f7e2e6fcd5653e3bd6", "docker.io/openshift/origin-metrics-deployer:latest"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1253410148, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1078227309, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23"]}, {"sizeBytes": 1059096712, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23"]}, {"sizeBytes": 1059094256, "names": ["registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23"]}, {"sizeBytes": 807879920, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34"]}, {"sizeBytes": 788612067, "names": 
["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 674269936, "names": ["registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7"]}, {"sizeBytes": 538578734, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-qa/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8"]}, {"sizeBytes": 538578478, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:c098692334e5da4fa911b8cce249a932baf3cb4bb02616473e17314fcdf36e91"]}, {"sizeBytes": 506897040, "names": ["docker-registry.default.svc:5000/scanplus-netbox/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306", "docker-registry.default.svc:5000/sp-netbox-dev/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306"]}, {"sizeBytes": 459352008, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23"]}, {"sizeBytes": 435574212, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41"]}, {"sizeBytes": 419508201, "names": ["registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed"]}, {"sizeBytes": 385380226, "names": ["registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23"]}, {"sizeBytes": 311278309, "names": ["registry.access.redhat.com/rhscl/postgresql-95-rhel7@sha256:d8b932fab1c9eb48a8a2f2fd94caf6dcff18a78413f621c92d280b158754d7ed"]}, {"sizeBytes": 299475138, "names": ["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41"]}, {"sizeBytes": 286901023, "names": ["docker.io/postgres@sha256:df5b5545e937ab152f2cf401fccb515d49363dfce1333c4b8b2580b6c0bbc207"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283460958, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10"]}, {"sizeBytes": 252162196, "names": 
["registry.access.redhat.com/openshift3/registry-console@sha256:1c2eab51d63b055cbb8e692932aaefe6a789cf97339135ad603f44d1e901bb7c", "registry.access.redhat.com/openshift3/registry-console:v3.7"]}, {"sizeBytes": 231249835, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9"]}, {"sizeBytes": 230670018, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 223765764, "names": ["registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 113556326, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace", "docker-registry.default.svc:5000/sp-netbox-prod/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace"]}, {"sizeBytes": 113556326, "names": ["docker-registry.default.svc:5000/scanplus-netbox/nginx@sha256:576f076a0f0f98630d1e8f38aaf8112f1cf7d2cb11829103e35d1d2512dfe86a"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:38Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:38Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-11-07T16:04:49Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:38Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:50:38Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:50:38Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": 
"2018-09-13T21:18:59Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:38Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-infra01.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-infra01.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "infra", "kubernetes.io/hostname": "sp-os-infra01.os.ad.scanplus.de", "node-role.kubernetes.io/infra": "true", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871031", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b"}, "selfLink": "/api/v1/nodes/sp-os-infra01.os.ad.scanplus.de", "uid": "ad32bbb1-0687-11e8-8e46-005056aa3492"}}]}}\n', '') ok: [sp-os-infra01.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-infra01.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-infra01.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-infra01.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/infra": "true", "region": "infra", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-infra01.os.ad.scanplus.de", "resourceVersion": "93871031", "selfLink": "/api/v1/nodes/sp-os-infra01.os.ad.scanplus.de", "uid": "ad32bbb1-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-infra01.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.241", "type": "InternalIP" }, { "address": "sp-os-infra01.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5825532Ki", "pods": "250" }, "capacity": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5927932Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:38Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:38Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:38Z", "lastTransitionTime": "2018-11-07T16:04:49Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:38Z", "lastTransitionTime": "2019-01-09T14:50:38Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", 
"type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:38Z", "lastTransitionTime": "2018-09-13T21:18:59Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker.io/openshift/origin-metrics-deployer@sha256:714ac774dce21991a55746f542eb2eb635eee568c14fe7f7e2e6fcd5653e3bd6", "docker.io/openshift/origin-metrics-deployer:latest" ], "sizeBytes": 1435113523 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41" ], "sizeBytes": 1253410148 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23" ], "sizeBytes": 1078227309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23" ], "sizeBytes": 1059096712 }, { "names": [ "registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23" ], "sizeBytes": 1059094256 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10" ], "sizeBytes": 807879920 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7" ], "sizeBytes": 674269936 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-qa/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8" ], "sizeBytes": 538578734 }, { "names": [ 
"docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:c098692334e5da4fa911b8cce249a932baf3cb4bb02616473e17314fcdf36e91" ], "sizeBytes": 538578478 }, { "names": [ "docker-registry.default.svc:5000/scanplus-netbox/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306", "docker-registry.default.svc:5000/sp-netbox-dev/netbox@sha256:676e24d739f2e5d04ab551b29f0511475f77057dbf03f67089d80b912f1ef306" ], "sizeBytes": 506897040 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23" ], "sizeBytes": 459352008 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41" ], "sizeBytes": 435574212 }, { "names": [ "registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed" ], "sizeBytes": 419508201 }, { "names": [ "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23" ], "sizeBytes": 385380226 }, { "names": [ "registry.access.redhat.com/rhscl/postgresql-95-rhel7@sha256:d8b932fab1c9eb48a8a2f2fd94caf6dcff18a78413f621c92d280b158754d7ed" ], "sizeBytes": 311278309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41" ], "sizeBytes": 299475138 }, { "names": [ "docker.io/postgres@sha256:df5b5545e937ab152f2cf401fccb515d49363dfce1333c4b8b2580b6c0bbc207" ], "sizeBytes": 286901023 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10" ], "sizeBytes": 283460958 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:1c2eab51d63b055cbb8e692932aaefe6a789cf97339135ad603f44d1e901bb7c", "registry.access.redhat.com/openshift3/registry-console:v3.7" ], "sizeBytes": 252162196 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9" ], "sizeBytes": 231249835 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10" ], "sizeBytes": 230670018 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator@sha256:fad9394a52bc33f153588ce8040fca3e3284a620f5e4000b4af68167c4874644", "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], 
"sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7" ], "sizeBytes": 223765764 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace", "docker-registry.default.svc:5000/sp-netbox-prod/nginx@sha256:4201e04854bb126f72c049441c2c9d6ebf53bc0f5468c933292b3549016faace" ], "sizeBytes": 113556326 }, { "names": [ "docker-registry.default.svc:5000/scanplus-netbox/nginx@sha256:576f076a0f0f98630d1e8f38aaf8112f1cf7d2cb11829103e35d1d2512dfe86a" ], "sizeBytes": 113556326 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590" ], "sizeBytes": 54277621 } ], "nodeInfo": { "architecture": "amd64", "bootID": "e8fd93e6-6af9-4cc7-984f-3312104dea6a", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A43A6-9C35-EF0E-B5B5-5B561C56E0A1" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:50:39 +0100 (0:00:11.266) 0:11:13.802 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r 
sp-os-infra02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:51:11 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "107842", "LimitSIGPENDING": "23059", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "10162041287687", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "10162041287730", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:51:11 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:51:11 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:51:12 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "shutdown.target multi-user.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "10162040934749", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "23059", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:51:11 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "10162040933524", "MainPID": "107842", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:51:12 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "70578176", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:51:11 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:51:11 CET", 
"StandardInput": "null", "AssertTimestampMonotonic": "10162040933525", "DefaultDependencies": "yes", "Requires": "basic.target var.mount -.mount", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "10162040934706", "AllowIsolate": "no", "Wants": "docker.service system.slice dnsmasq.service", "After": "docker.service -.mount systemd-journald.socket dnsmasq.service basic.target chronyd.service var.mount ntpd.service system.slice", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "10162040930206", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "10162040916271", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-infra02.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:51:12 CET", "ActiveEnterTimestampMonotonic": "10162041287730", "ActiveExitTimestamp": "Wed 2019-01-09 14:51:11 CET", "ActiveExitTimestampMonotonic": "10162040916271", "ActiveState": "active", "After": "docker.service -.mount systemd-journald.socket dnsmasq.service basic.target chronyd.service var.mount ntpd.service system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:51:11 CET", "AssertTimestampMonotonic": "10162040933525", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:51:11 CET", "ConditionTimestampMonotonic": "10162040933524", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": 
"/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "107842", "ExecMainStartTimestamp": "Wed 2019-01-09 14:51:11 CET", "ExecMainStartTimestampMonotonic": "10162040934706", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:51:11 CET", "InactiveEnterTimestampMonotonic": "10162040930206", "InactiveExitTimestamp": "Wed 2019-01-09 14:51:11 CET", "InactiveExitTimestampMonotonic": "10162040934749", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "23059", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "23059", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "107842", "MemoryAccounting": "yes", "MemoryCurrent": "70578176", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target var.mount -.mount", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker.service system.slice dnsmasq.service", "WatchdogTimestamp": "Wed 2019-01-09 14:51:12 CET", "WatchdogTimestampMonotonic": "10162041287687", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait 
for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:50:40 +0100 (0:00:00.873) 0:11:14.675 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-infra02.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-infra02.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5927924Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.242"}, {"type": "Hostname", "address": "sp-os-infra02.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "c13372fd-fff8-45f8-8e8d-eeb27a3a5575", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A8563-28D9-EAC6-14A3-7E67E42D0ECC", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5825524Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1253410148, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41"]}, {"sizeBytes": 1232772517, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:f002aad12ebbbe86fc846736c22432bf51fa78efc39514601e550974be8d89f4", "registry.access.redhat.com/openshift3/ose-deployer:v3.9.41"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1078227309, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23"]}, {"sizeBytes": 1059096712, "names": 
["registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23"]}, {"sizeBytes": 1059094256, "names": ["registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23"]}, {"sizeBytes": 807879920, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 674269936, "names": ["registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7"]}, {"sizeBytes": 548913095, "names": ["registry.access.redhat.com/openshift3/ose-ansible-service-broker@sha256:0735c07fb871a6e03dfc606d7349bf497ddaf6e4f3df2a081052efb1c7ce5f1b", "registry.access.redhat.com/openshift3/ose-ansible-service-broker:v3.9.41"]}, {"sizeBytes": 538578734, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8"]}, {"sizeBytes": 538578732, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:2c4eda089188c8f1b58768e24b4a3db182faaba39c30e35664d898d7434ba449"]}, {"sizeBytes": 538578403, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:8d76df83d61b207def5e131c324315957e5561009b833135b34dc79ce917c116"]}, {"sizeBytes": 538578369, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:f4a7abbbb56bc6629d486b86311c6c959c8055fef931b5caa87432e2b0da985a"]}, {"sizeBytes": 459352008, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23"]}, {"sizeBytes": 435574212, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41"]}, {"sizeBytes": 385380226, "names": ["registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23"]}, {"sizeBytes": 299475138, "names": ["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", 
"registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283460958, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10"]}, {"sizeBytes": 279296115, "names": ["registry.access.redhat.com/openshift3/prometheus@sha256:818df27c1cee709f6845e7be37ad44a8c8800517a019861397c0b57faa5cbe05", "registry.access.redhat.com/openshift3/prometheus:v3.10.34"]}, {"sizeBytes": 256885327, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:32055a6793fe35313d2fce4b4fbbb9424fc63411f532aef720563d682ac32531", "registry.access.redhat.com/openshift3/registry-console:v3.7"]}, {"sizeBytes": 238450205, "names": ["registry.access.redhat.com/openshift3/oauth-proxy@sha256:7631a6234544686912fb48417f4e7765fd81a212178ae33d4ff8a13b9df3c34d", "registry.access.redhat.com/openshift3/oauth-proxy:v3.10.34"]}, {"sizeBytes": 231329924, "names": ["registry.access.redhat.com/openshift3/prometheus-alertmanager@sha256:59d83911f4b502fb2da243de072e3eeca056ba4d52516d75e0cb1848e4621ed7", "registry.access.redhat.com/openshift3/prometheus-alertmanager:v3.10.34"]}, {"sizeBytes": 231249835, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9"]}, {"sizeBytes": 230670018, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 223765764, "names": ["registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 217288704, "names": ["registry.access.redhat.com/openshift3/prometheus-alert-buffer@sha256:49b928d86e8d911dcb710676983aa2d966105fab320ef7b71f8c60439b28e583", "registry.access.redhat.com/openshift3/prometheus-alert-buffer:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": 
["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:40Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:39:24Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-infra02.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-infra02.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "infra", "kubernetes.io/hostname": "sp-os-infra02.os.ad.scanplus.de", "node-role.kubernetes.io/infra": "true", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871051", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b"}, "selfLink": "/api/v1/nodes/sp-os-infra02.os.ad.scanplus.de", "uid": "ad31eeb0-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-infra02.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-infra02.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-infra02.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/infra": "true", "region": "infra", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-infra02.os.ad.scanplus.de", "resourceVersion": "93871051", "selfLink": "/api/v1/nodes/sp-os-infra02.os.ad.scanplus.de", "uid": "ad31eeb0-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-infra02.os.ad.scanplus.de" }, "status": { 
"addresses": [ { "address": "172.30.80.242", "type": "InternalIP" }, { "address": "sp-os-infra02.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5825524Ki", "pods": "250" }, "capacity": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5927924Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2019-01-09T14:50:40Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T22:39:24Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41" ], "sizeBytes": 1253410148 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:f002aad12ebbbe86fc846736c22432bf51fa78efc39514601e550974be8d89f4", "registry.access.redhat.com/openshift3/ose-deployer:v3.9.41" ], "sizeBytes": 1232772517 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23" ], "sizeBytes": 1078227309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23" ], "sizeBytes": 1059096712 }, { "names": [ "registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23" ], "sizeBytes": 1059094256 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10" ], "sizeBytes": 807879920 }, { 
"names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7" ], "sizeBytes": 674269936 }, { "names": [ "registry.access.redhat.com/openshift3/ose-ansible-service-broker@sha256:0735c07fb871a6e03dfc606d7349bf497ddaf6e4f3df2a081052efb1c7ce5f1b", "registry.access.redhat.com/openshift3/ose-ansible-service-broker:v3.9.41" ], "sizeBytes": 548913095 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8" ], "sizeBytes": 538578734 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:2c4eda089188c8f1b58768e24b4a3db182faaba39c30e35664d898d7434ba449" ], "sizeBytes": 538578732 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:8d76df83d61b207def5e131c324315957e5561009b833135b34dc79ce917c116" ], "sizeBytes": 538578403 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:f4a7abbbb56bc6629d486b86311c6c959c8055fef931b5caa87432e2b0da985a" ], "sizeBytes": 538578369 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23" ], "sizeBytes": 459352008 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41" ], "sizeBytes": 435574212 }, { "names": [ "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23" ], "sizeBytes": 385380226 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41" ], "sizeBytes": 299475138 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10" ], 
"sizeBytes": 283460958 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus@sha256:818df27c1cee709f6845e7be37ad44a8c8800517a019861397c0b57faa5cbe05", "registry.access.redhat.com/openshift3/prometheus:v3.10.34" ], "sizeBytes": 279296115 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:32055a6793fe35313d2fce4b4fbbb9424fc63411f532aef720563d682ac32531", "registry.access.redhat.com/openshift3/registry-console:v3.7" ], "sizeBytes": 256885327 }, { "names": [ "registry.access.redhat.com/openshift3/oauth-proxy@sha256:7631a6234544686912fb48417f4e7765fd81a212178ae33d4ff8a13b9df3c34d", "registry.access.redhat.com/openshift3/oauth-proxy:v3.10.34" ], "sizeBytes": 238450205 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-alertmanager@sha256:59d83911f4b502fb2da243de072e3eeca056ba4d52516d75e0cb1848e4621ed7", "registry.access.redhat.com/openshift3/prometheus-alertmanager:v3.10.34" ], "sizeBytes": 231329924 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9" ], "sizeBytes": 231249835 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10" ], "sizeBytes": 230670018 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7" ], "sizeBytes": 223765764 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-alert-buffer@sha256:49b928d86e8d911dcb710676983aa2d966105fab320ef7b71f8c60439b28e583", "registry.access.redhat.com/openshift3/prometheus-alert-buffer:v3.10.34" ], "sizeBytes": 217288704 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590" ], "sizeBytes": 54277621 } ], "nodeInfo": { "architecture": "amd64", "bootID": "c13372fd-fff8-45f8-8e8d-eeb27a3a5575", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A8563-28D9-EAC6-14A3-7E67E42D0ECC" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using 
module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-infra02.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-infra02.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5927924Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.242"}, {"type": "Hostname", "address": "sp-os-infra02.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "c13372fd-fff8-45f8-8e8d-eeb27a3a5575", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A8563-28D9-EAC6-14A3-7E67E42D0ECC", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5825524Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1253410148, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41"]}, {"sizeBytes": 1232772517, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:f002aad12ebbbe86fc846736c22432bf51fa78efc39514601e550974be8d89f4", "registry.access.redhat.com/openshift3/ose-deployer:v3.9.41"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1078227309, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23"]}, {"sizeBytes": 1059096712, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23"]}, {"sizeBytes": 1059094256, "names": ["registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", 
"registry.access.redhat.com/openshift3/ose:v3.7.23"]}, {"sizeBytes": 807879920, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 674269936, "names": ["registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7"]}, {"sizeBytes": 548913095, "names": ["registry.access.redhat.com/openshift3/ose-ansible-service-broker@sha256:0735c07fb871a6e03dfc606d7349bf497ddaf6e4f3df2a081052efb1c7ce5f1b", "registry.access.redhat.com/openshift3/ose-ansible-service-broker:v3.9.41"]}, {"sizeBytes": 538578734, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8"]}, {"sizeBytes": 538578732, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:2c4eda089188c8f1b58768e24b4a3db182faaba39c30e35664d898d7434ba449"]}, {"sizeBytes": 538578403, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:8d76df83d61b207def5e131c324315957e5561009b833135b34dc79ce917c116"]}, {"sizeBytes": 538578369, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:f4a7abbbb56bc6629d486b86311c6c959c8055fef931b5caa87432e2b0da985a"]}, {"sizeBytes": 459352008, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23"]}, {"sizeBytes": 435574212, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41"]}, {"sizeBytes": 385380226, "names": ["registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23"]}, {"sizeBytes": 299475138, "names": ["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", 
"registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283460958, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10"]}, {"sizeBytes": 279296115, "names": ["registry.access.redhat.com/openshift3/prometheus@sha256:818df27c1cee709f6845e7be37ad44a8c8800517a019861397c0b57faa5cbe05", "registry.access.redhat.com/openshift3/prometheus:v3.10.34"]}, {"sizeBytes": 256885327, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:32055a6793fe35313d2fce4b4fbbb9424fc63411f532aef720563d682ac32531", "registry.access.redhat.com/openshift3/registry-console:v3.7"]}, {"sizeBytes": 238450205, "names": ["registry.access.redhat.com/openshift3/oauth-proxy@sha256:7631a6234544686912fb48417f4e7765fd81a212178ae33d4ff8a13b9df3c34d", "registry.access.redhat.com/openshift3/oauth-proxy:v3.10.34"]}, {"sizeBytes": 231329924, "names": ["registry.access.redhat.com/openshift3/prometheus-alertmanager@sha256:59d83911f4b502fb2da243de072e3eeca056ba4d52516d75e0cb1848e4621ed7", "registry.access.redhat.com/openshift3/prometheus-alertmanager:v3.10.34"]}, {"sizeBytes": 231249835, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9"]}, {"sizeBytes": 230670018, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 223765764, "names": ["registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 217288704, "names": ["registry.access.redhat.com/openshift3/prometheus-alert-buffer@sha256:49b928d86e8d911dcb710676983aa2d966105fab320ef7b71f8c60439b28e583", "registry.access.redhat.com/openshift3/prometheus-alert-buffer:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", 
"lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:40Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:39:24Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:40Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-infra02.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-infra02.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "infra", "kubernetes.io/hostname": "sp-os-infra02.os.ad.scanplus.de", "node-role.kubernetes.io/infra": "true", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871051", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b"}, "selfLink": "/api/v1/nodes/sp-os-infra02.os.ad.scanplus.de", "uid": "ad31eeb0-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-infra02.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-infra02.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-infra02.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/infra": "true", "region": "infra", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-infra02.os.ad.scanplus.de", "resourceVersion": "93871051", "selfLink": "/api/v1/nodes/sp-os-infra02.os.ad.scanplus.de", "uid": "ad31eeb0-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-infra02.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.242", "type": "InternalIP" }, { "address": "sp-os-infra02.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5825524Ki", "pods": "250" }, "capacity": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5927924Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": 
"False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2019-01-09T14:50:40Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:40Z", "lastTransitionTime": "2018-09-13T22:39:24Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41" ], "sizeBytes": 1253410148 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:f002aad12ebbbe86fc846736c22432bf51fa78efc39514601e550974be8d89f4", "registry.access.redhat.com/openshift3/ose-deployer:v3.9.41" ], "sizeBytes": 1232772517 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23" ], "sizeBytes": 1078227309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23" ], "sizeBytes": 1059096712 }, { "names": [ "registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23" ], "sizeBytes": 1059094256 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10" ], "sizeBytes": 807879920 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ 
"registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7" ], "sizeBytes": 674269936 }, { "names": [ "registry.access.redhat.com/openshift3/ose-ansible-service-broker@sha256:0735c07fb871a6e03dfc606d7349bf497ddaf6e4f3df2a081052efb1c7ce5f1b", "registry.access.redhat.com/openshift3/ose-ansible-service-broker:v3.9.41" ], "sizeBytes": 548913095 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8" ], "sizeBytes": 538578734 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:2c4eda089188c8f1b58768e24b4a3db182faaba39c30e35664d898d7434ba449" ], "sizeBytes": 538578732 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:8d76df83d61b207def5e131c324315957e5561009b833135b34dc79ce917c116" ], "sizeBytes": 538578403 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:f4a7abbbb56bc6629d486b86311c6c959c8055fef931b5caa87432e2b0da985a" ], "sizeBytes": 538578369 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23" ], "sizeBytes": 459352008 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41" ], "sizeBytes": 435574212 }, { "names": [ "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23" ], "sizeBytes": 385380226 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41" ], "sizeBytes": 299475138 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10" ], "sizeBytes": 283460958 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus@sha256:818df27c1cee709f6845e7be37ad44a8c8800517a019861397c0b57faa5cbe05", "registry.access.redhat.com/openshift3/prometheus:v3.10.34" ], "sizeBytes": 279296115 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:32055a6793fe35313d2fce4b4fbbb9424fc63411f532aef720563d682ac32531", "registry.access.redhat.com/openshift3/registry-console:v3.7" ], "sizeBytes": 256885327 }, { "names": [ 
"registry.access.redhat.com/openshift3/oauth-proxy@sha256:7631a6234544686912fb48417f4e7765fd81a212178ae33d4ff8a13b9df3c34d", "registry.access.redhat.com/openshift3/oauth-proxy:v3.10.34" ], "sizeBytes": 238450205 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-alertmanager@sha256:59d83911f4b502fb2da243de072e3eeca056ba4d52516d75e0cb1848e4621ed7", "registry.access.redhat.com/openshift3/prometheus-alertmanager:v3.10.34" ], "sizeBytes": 231329924 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9" ], "sizeBytes": 231249835 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10" ], "sizeBytes": 230670018 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7" ], "sizeBytes": 223765764 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-alert-buffer@sha256:49b928d86e8d911dcb710676983aa2d966105fab320ef7b71f8c60439b28e583", "registry.access.redhat.com/openshift3/prometheus-alert-buffer:v3.10.34" ], "sizeBytes": 217288704 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590" ], "sizeBytes": 54277621 } ], "nodeInfo": { "architecture": "amd64", "bootID": "c13372fd-fff8-45f8-8e8d-eeb27a3a5575", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A8563-28D9-EAC6-14A3-7E67E42D0ECC" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 
0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-infra02.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-infra02.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5927924Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.242"}, {"type": "Hostname", "address": "sp-os-infra02.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "c13372fd-fff8-45f8-8e8d-eeb27a3a5575", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A8563-28D9-EAC6-14A3-7E67E42D0ECC", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "2", "memory": "5825524Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1253410148, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41"]}, {"sizeBytes": 1232772517, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:f002aad12ebbbe86fc846736c22432bf51fa78efc39514601e550974be8d89f4", "registry.access.redhat.com/openshift3/ose-deployer:v3.9.41"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1078227309, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23"]}, {"sizeBytes": 1059096712, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23"]}, {"sizeBytes": 1059094256, "names": ["registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23"]}, {"sizeBytes": 807879920, "names": ["registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", 
"registry.access.redhat.com/openshift3/ose-deployer:v3.10.34"]}, {"sizeBytes": 788612067, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 674269936, "names": ["registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7"]}, {"sizeBytes": 548913095, "names": ["registry.access.redhat.com/openshift3/ose-ansible-service-broker@sha256:0735c07fb871a6e03dfc606d7349bf497ddaf6e4f3df2a081052efb1c7ce5f1b", "registry.access.redhat.com/openshift3/ose-ansible-service-broker:v3.9.41"]}, {"sizeBytes": 538578734, "names": ["docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8"]}, {"sizeBytes": 538578732, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:2c4eda089188c8f1b58768e24b4a3db182faaba39c30e35664d898d7434ba449"]}, {"sizeBytes": 538578403, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:8d76df83d61b207def5e131c324315957e5561009b833135b34dc79ce917c116"]}, {"sizeBytes": 538578369, "names": ["docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:f4a7abbbb56bc6629d486b86311c6c959c8055fef931b5caa87432e2b0da985a"]}, {"sizeBytes": 459352008, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23"]}, {"sizeBytes": 435574212, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41"]}, {"sizeBytes": 385380226, "names": ["registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23"]}, {"sizeBytes": 299475138, "names": ["registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41"]}, {"sizeBytes": 286681664, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34"]}, {"sizeBytes": 286138919, "names": ["registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7"]}, {"sizeBytes": 283460958, "names": ["registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10"]}, {"sizeBytes": 279296115, "names": ["registry.access.redhat.com/openshift3/prometheus@sha256:818df27c1cee709f6845e7be37ad44a8c8800517a019861397c0b57faa5cbe05", "registry.access.redhat.com/openshift3/prometheus:v3.10.34"]}, {"sizeBytes": 256885327, "names": 
["registry.access.redhat.com/openshift3/registry-console@sha256:32055a6793fe35313d2fce4b4fbbb9424fc63411f532aef720563d682ac32531", "registry.access.redhat.com/openshift3/registry-console:v3.7"]}, {"sizeBytes": 238450205, "names": ["registry.access.redhat.com/openshift3/oauth-proxy@sha256:7631a6234544686912fb48417f4e7765fd81a212178ae33d4ff8a13b9df3c34d", "registry.access.redhat.com/openshift3/oauth-proxy:v3.10.34"]}, {"sizeBytes": 231329924, "names": ["registry.access.redhat.com/openshift3/prometheus-alertmanager@sha256:59d83911f4b502fb2da243de072e3eeca056ba4d52516d75e0cb1848e4621ed7", "registry.access.redhat.com/openshift3/prometheus-alertmanager:v3.10.34"]}, {"sizeBytes": 231249835, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9"]}, {"sizeBytes": 230670018, "names": ["registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10"]}, {"sizeBytes": 227525342, "names": ["registry.access.redhat.com/openshift3/logging-curator:v3.7"]}, {"sizeBytes": 223765764, "names": ["registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 217288704, "names": ["registry.access.redhat.com/openshift3/prometheus-alert-buffer@sha256:49b928d86e8d911dcb710676983aa2d966105fab320ef7b71f8c60439b28e583", "registry.access.redhat.com/openshift3/prometheus-alert-buffer:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}, {"sizeBytes": 214175104, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41"]}, {"sizeBytes": 208859100, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23"]}, {"sizeBytes": 54277621, "names": ["docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:50Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:50Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:06Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:50Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:50:50Z", "reason": "KubeletReady", "lastHeartbeatTime": 
"2019-01-09T14:50:50Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:39:24Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:50Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-infra02.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-infra02.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "infra", "kubernetes.io/hostname": "sp-os-infra02.os.ad.scanplus.de", "node-role.kubernetes.io/infra": "true", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871103", "creationTimestamp": "2018-01-31T13:07:23Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b"}, "selfLink": "/api/v1/nodes/sp-os-infra02.os.ad.scanplus.de", "uid": "ad31eeb0-0687-11e8-8e46-005056aa3492"}}]}}\n', '') ok: [sp-os-infra02.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-infra02.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-infra02.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "8e981b702db3988aa35f763d71f2112b", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:23Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-infra02.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/infra": "true", "region": "infra", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-infra02.os.ad.scanplus.de", "resourceVersion": "93871103", "selfLink": "/api/v1/nodes/sp-os-infra02.os.ad.scanplus.de", "uid": "ad31eeb0-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-infra02.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.242", "type": "InternalIP" }, { "address": "sp-os-infra02.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5825524Ki", "pods": "250" }, "capacity": { "cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "5927924Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:50Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:50Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:50Z", "lastTransitionTime": "2018-09-13T23:04:06Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:50Z", 
"lastTransitionTime": "2019-01-09T14:50:50Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:50Z", "lastTransitionTime": "2018-09-13T22:39:24Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:e8f189677c3608469dd2ef8e0b9c87a7161322a17902f6e2289aa0a77adf8869", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9.41" ], "sizeBytes": 1253410148 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:f002aad12ebbbe86fc846736c22432bf51fa78efc39514601e550974be8d89f4", "registry.access.redhat.com/openshift3/ose-deployer:v3.9.41" ], "sizeBytes": 1232772517 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:363c85bff3a7a9092d5df62ffac5a945d00b3544975631962a9c4adf80f938c3", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7.23" ], "sizeBytes": 1078227309 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:3e36f0dc9e6c43b5e20e347c0e0cb590f263bc1ef7f925b7590d26a358f7f41e", "registry.access.redhat.com/openshift3/ose-deployer:v3.7.23" ], "sizeBytes": 1059096712 }, { "names": [ "registry.access.redhat.com/openshift3/ose@sha256:4c6d10c92d69d8445d9ede7c87ef1bb28c9e473d8624620e701e0a80d6091e92", "registry.access.redhat.com/openshift3/ose@sha256:a8652472480ccc592e774230e1b5e4dfaea3b330bee1ece452914c1830361b06", "registry.access.redhat.com/openshift3/ose:v3.7", "registry.access.redhat.com/openshift3/ose:v3.7.23" ], "sizeBytes": 1059094256 }, { "names": [ "registry.access.redhat.com/openshift3/ose-haproxy-router@sha256:f09448e7c03254b309a56ac24f1194667d17d64699da144639ccbebef7301b45", "registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10" ], "sizeBytes": 807879920 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34" ], "sizeBytes": 788614541 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:3d0b24963b4099bb06e6bf70cd0096c3c332dd814dd6497b60f4ca5902473ca5", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 788612067 }, { "names": [ "registry.access.redhat.com/openshift3/logging-kibana@sha256:4c00973b15883be9a95ee9fcc0412c3ccacd19e49681790dc6f592abd1b9889a", "registry.access.redhat.com/openshift3/logging-kibana:v3.7" ], "sizeBytes": 674269936 }, { "names": [ "registry.access.redhat.com/openshift3/ose-ansible-service-broker@sha256:0735c07fb871a6e03dfc606d7349bf497ddaf6e4f3df2a081052efb1c7ce5f1b", "registry.access.redhat.com/openshift3/ose-ansible-service-broker:v3.9.41" ], "sizeBytes": 548913095 }, { "names": [ 
"docker-registry.default.svc:5000/sp-netbox-dev/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8", "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:143d5da0a09a2afb740402d2dea252053feff892ea5c1ae17c205a57a5ddbcd8" ], "sizeBytes": 538578734 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:2c4eda089188c8f1b58768e24b4a3db182faaba39c30e35664d898d7434ba449" ], "sizeBytes": 538578732 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:8d76df83d61b207def5e131c324315957e5561009b833135b34dc79ce917c116" ], "sizeBytes": 538578403 }, { "names": [ "docker-registry.default.svc:5000/sp-netbox-prod/netbox-adauth@sha256:f4a7abbbb56bc6629d486b86311c6c959c8055fef931b5caa87432e2b0da985a" ], "sizeBytes": 538578369 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:ca4bdf78afcf2cfce000e77e4e5173245d3222b20bfb481dbebc9d8141ad454f", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.7.23" ], "sizeBytes": 459352008 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:83348fb254e4de4783ca482aae13af27200aef08d1434515da24dbb8eb4b1f1b", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.9.41" ], "sizeBytes": 435574212 }, { "names": [ "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover@sha256:30728ae0d140912f0a68c39e74f695c333f9c6cd7d746639ccc9ca7ca8d63959", "registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:v3.7.23" ], "sizeBytes": 385380226 }, { "names": [ "registry.access.redhat.com/openshift3/ose-template-service-broker@sha256:0ebba15f6587d5270fc5fb18a7c3cdf7fb27c7e941e18564078811c1326b2a9b", "registry.access.redhat.com/openshift3/ose-template-service-broker:v3.9.41" ], "sizeBytes": 299475138 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:de7b851188e6685066194341ced34f6cf24d3e44a82bbc363fb1ff7655f8c764", "registry.access.redhat.com/openshift3/logging-fluentd:v3.10.34" ], "sizeBytes": 286681664 }, { "names": [ "registry.access.redhat.com/openshift3/logging-fluentd@sha256:24675c138a7529041b32650932f3969590f927b21fbaba1c2072075fa881c6a2", "registry.access.redhat.com/openshift3/logging-fluentd:v3.7" ], "sizeBytes": 286138919 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-registry@sha256:3a723441d5d82af63147027dd4d89d1b67fcb60bd1bc7c9bb55f4c5b8d1bc204", "registry.access.redhat.com/openshift3/ose-docker-registry:v3.10" ], "sizeBytes": 283460958 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus@sha256:818df27c1cee709f6845e7be37ad44a8c8800517a019861397c0b57faa5cbe05", "registry.access.redhat.com/openshift3/prometheus:v3.10.34" ], "sizeBytes": 279296115 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:32055a6793fe35313d2fce4b4fbbb9424fc63411f532aef720563d682ac32531", "registry.access.redhat.com/openshift3/registry-console:v3.7" ], "sizeBytes": 256885327 }, { "names": [ "registry.access.redhat.com/openshift3/oauth-proxy@sha256:7631a6234544686912fb48417f4e7765fd81a212178ae33d4ff8a13b9df3c34d", "registry.access.redhat.com/openshift3/oauth-proxy:v3.10.34" ], "sizeBytes": 238450205 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-alertmanager@sha256:59d83911f4b502fb2da243de072e3eeca056ba4d52516d75e0cb1848e4621ed7", "registry.access.redhat.com/openshift3/prometheus-alertmanager:v3.10.34" ], "sizeBytes": 231329924 }, { "names": [ 
"registry.access.redhat.com/openshift3/registry-console@sha256:9c53e026026fc4134fbc73dc7cbc9835bfc9c6848da694c9d29534449066b653", "registry.access.redhat.com/openshift3/registry-console:v3.9" ], "sizeBytes": 231249835 }, { "names": [ "registry.access.redhat.com/openshift3/registry-console@sha256:eeb0bee077dc8c6d6552562431bd8e917cb9b9984455a0c0a98c8f20a4ef1bb4", "registry.access.redhat.com/openshift3/registry-console:v3.10" ], "sizeBytes": 230670018 }, { "names": [ "registry.access.redhat.com/openshift3/logging-curator:v3.7" ], "sizeBytes": 227525342 }, { "names": [ "registry.access.redhat.com/openshift3/logging-auth-proxy@sha256:ad1e43e76f02ddd3fc2e40592d6f6fa57cffeae4b5cc4138707bb6505e056b62", "registry.access.redhat.com/openshift3/logging-auth-proxy:v3.7" ], "sizeBytes": 223765764 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-alert-buffer@sha256:49b928d86e8d911dcb710676983aa2d966105fab320ef7b71f8c60439b28e583", "registry.access.redhat.com/openshift3/prometheus-alert-buffer:v3.10.34" ], "sizeBytes": 217288704 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:c04b52b62eb99ee9cd75d91eb09b43a896e8ea87603d04b157f5d83c248eeed1", "registry.access.redhat.com/openshift3/ose-pod:v3.9.41" ], "sizeBytes": 214175104 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:f06dd73c4a4cbf18a409ab7f924bac8125d342df847f9fe221549084cabce9bd", "registry.access.redhat.com/openshift3/ose-pod:v3.7.23" ], "sizeBytes": 208859100 }, { "names": [ "docker.io/nginx@sha256:5aadb68304a38a8e2719605e4e180413f390cd6647602bee9bdedd59753c3590" ], "sizeBytes": 54277621 } ], "nodeInfo": { "architecture": "amd64", "bootID": "c13372fd-fff8-45f8-8e8d-eeb27a3a5575", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A8563-28D9-EAC6-14A3-7E67E42D0ECC" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:50:51 +0100 (0:00:11.410) 0:11:26.086 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC 
ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node02.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:51:38 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "90887", "LimitSIGPENDING": "63379", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "10162063056341", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "10162063056488", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:51:38 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:51:38 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:51:39 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "multi-user.target shutdown.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "10162062436941", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63379", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:51:38 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "10162062434552", "MainPID": "90887", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:51:39 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "95842304", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:51:38 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": 
"always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:51:38 CET", "StandardInput": "null", "AssertTimestampMonotonic": "10162062434552", "DefaultDependencies": "yes", "Requires": "-.mount basic.target var.mount", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "10162062436849", "AllowIsolate": "no", "Wants": "docker.service dnsmasq.service system.slice", "After": "var.mount systemd-journald.socket chronyd.service basic.target system.slice -.mount dnsmasq.service docker.service ntpd.service", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "10162062421726", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "10162062401916", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node02.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:51:39 CET", "ActiveEnterTimestampMonotonic": "10162063056488", "ActiveExitTimestamp": "Wed 2019-01-09 14:51:38 CET", "ActiveExitTimestampMonotonic": "10162062401916", "ActiveState": "active", "After": "var.mount systemd-journald.socket chronyd.service basic.target system.slice -.mount dnsmasq.service docker.service ntpd.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:51:38 CET", "AssertTimestampMonotonic": "10162062434552", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:51:38 CET", "ConditionTimestampMonotonic": "10162062434552", "Conflicts": "shutdown.target", 
"ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "90887", "ExecMainStartTimestamp": "Wed 2019-01-09 14:51:38 CET", "ExecMainStartTimestampMonotonic": "10162062436849", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:51:38 CET", "InactiveEnterTimestampMonotonic": "10162062421726", "InactiveExitTimestamp": "Wed 2019-01-09 14:51:38 CET", "InactiveExitTimestampMonotonic": "10162062436941", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63379", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63379", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "90887", "MemoryAccounting": "yes", "MemoryCurrent": "95842304", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount basic.target var.mount", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", 
"WantedBy": "multi-user.target", "Wants": "docker.service dnsmasq.service system.slice", "WatchdogTimestamp": "Wed 2019-01-09 14:51:39 CET", "WatchdogTimestampMonotonic": "10162063056341", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:50:53 +0100 (0:00:01.498) 0:11:27.585 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node02.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node02.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.244"}, {"type": "Hostname", "address": "sp-os-node02.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "b7737317-0428-4a33-b663-e2587a5ef96f", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A8A5B-A1AA-8149-4125-3576F319A9BB", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:04:20Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:20Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T13:51:39Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:53Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": 
"2018-09-13T22:39:23Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node02.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node02.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node02.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871126", "creationTimestamp": "2018-01-31T13:07:22Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node02.os.ad.scanplus.de", "uid": "ad1fd267-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node02.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node02.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:22Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node02.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node02.os.ad.scanplus.de", "resourceVersion": "93871126", "selfLink": "/api/v1/nodes/sp-os-node02.os.ad.scanplus.de", "uid": "ad1fd267-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-node02.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.244", "type": "InternalIP" }, { "address": "sp-os-node02.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2018-09-13T23:04:20Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2018-09-13T23:04:20Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2019-01-09T13:51:39Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2019-01-09T14:50:53Z", "message": "container runtime is down", "reason": 
"KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2018-09-13T22:39:23Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "b7737317-0428-4a33-b663-e2587a5ef96f", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A8A5B-A1AA-8149-4125-3576F319A9BB" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node02.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node02.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.244"}, {"type": "Hostname", "address": "sp-os-node02.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "b7737317-0428-4a33-b663-e2587a5ef96f", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A8A5B-A1AA-8149-4125-3576F319A9BB", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:04:20Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:20Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T13:51:39Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:50:53Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "container runtime is down", "type": 
"Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:39:23Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:50:53Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node02.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node02.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node02.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871126", "creationTimestamp": "2018-01-31T13:07:22Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node02.os.ad.scanplus.de", "uid": "ad1fd267-0687-11e8-8e46-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node02.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node02.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:22Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node02.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node02.os.ad.scanplus.de", "resourceVersion": "93871126", "selfLink": "/api/v1/nodes/sp-os-node02.os.ad.scanplus.de", "uid": "ad1fd267-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-node02.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.244", "type": "InternalIP" }, { "address": "sp-os-node02.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2018-09-13T23:04:20Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2018-09-13T23:04:20Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2019-01-09T13:51:39Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2019-01-09T14:50:53Z", 
"message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:50:53Z", "lastTransitionTime": "2018-09-13T22:39:23Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "b7737317-0428-4a33-b663-e2587a5ef96f", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A8A5B-A1AA-8149-4125-3576F319A9BB" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node02.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node02.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.244"}, {"type": "Hostname", "address": "sp-os-node02.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "b7737317-0428-4a33-b663-e2587a5ef96f", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A8A5B-A1AA-8149-4125-3576F319A9BB", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1889747575, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi@sha256:afbdea5ea505f5a1604e9f08a4805bc749c3b3aa6ecc990517c8fce0bbd03423"]}, {"sizeBytes": 1839139266, "names": ["docker-registry.default.svc:5000/automation-gleim/networkapi@sha256:3b748fdb5c05c4e18109aab2b53327deba8e9d4743becfac875323b2cf51895c", "docker-registry.default.svc:5000/automation-gleim/networkapi:latest"]}, {"sizeBytes": 1838632466, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35_networkapi@sha256:c5aada4116d01acaae3c86329a2b1fcf6f0f7c95547e27b485195ced8befbeba"]}, {"sizeBytes": 1838632433, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:6573ce619dbbb908583af241a3f02b1502a5d586f3b7f49094cf627c6912ba26"]}, 
{"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1238893950, "names": ["docker-registry.default.svc:5000/automation-maier-blu/aidabluworkflows@sha256:2a71d103bcdc159f16a0166f1d81f45507fd1716f20872bc0657b4af53e5377b"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1168831034, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:cd02dd0022aa91941dba88f0ccc5ddccf54c99756235f478495039b77c23d8cc"]}, {"sizeBytes": 1168830389, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:478d2772bf6dc0a4f89d498d54779ab63d26849529f44671dceb7bcf79d4bc99", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest"]}, {"sizeBytes": 971853973, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35_taggingclient@sha256:bce082c6697792bf0a51f8c60e3072624c682858ce60cc4b0fe95e0b32dac922", "docker-registry.default.svc:5000/automation-prodtest/autopython35_taggingclient:latest"]}, {"sizeBytes": 971853606, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35_taggingclient@sha256:0c4af808fc9ce2eba6e40ddef31e5ceb2021edd72ade164e9c48e70d9b4a0a39"]}, {"sizeBytes": 881870541, "names": ["docker-registry.default.svc:5000/automation-haertenstein/automationapi@sha256:49885c95984f8a03f0cb024ae7f15818cce7a7167f1c43039bbf65cf1af0c01d"]}, {"sizeBytes": 877840275, "names": ["docker-registry.default.svc:5000/automation-haertenstein/aciapi@sha256:a0656e96f55a27775ef72c7af534244cecacc60dc1c4e7e7a6bfa8857ad65a1f"]}, {"sizeBytes": 877838781, "names": ["docker-registry.default.svc:5000/automation-prodtest/aciapi@sha256:5592eb4f798092f2b8d90b4c2090a06f7c3118867bb3772e3cf14dc87311daef"]}, {"sizeBytes": 877802186, "names": ["docker-registry.default.svc:5000/automation-maier/ftpclient@sha256:2d099141371733ec4eab4914d2af86ce2088d5fc601d6738ae0be7c10fb3c901"]}, {"sizeBytes": 877796412, "names": ["docker-registry.default.svc:5000/automation-ziesel/ftpclient@sha256:f5aec16035defd3b78dae96b2a243333d05de3462385d05a9bb55364b6ee1d6f"]}, {"sizeBytes": 877794526, "names": ["docker-registry.default.svc:5000/automation-gleim/vcenterfileclient@sha256:1ae161d08e1be8d03ef906215042fac845596e345445caf4d638214a139c7a10"]}, {"sizeBytes": 877794354, "names": ["docker-registry.default.svc:5000/automation-prod/vcenterfileclient@sha256:d6a637d8e57cc73d7780ae24ea91e78c3acb90b8520c597edd7bc022bdab9d0d"]}, {"sizeBytes": 877793199, "names": ["docker-registry.default.svc:5000/automation-haertenstein/ftpclient@sha256:7e3054758eb12cae27e663bd4536148ed0e8bfe13b891f18096fa3240870afda"]}, {"sizeBytes": 877716514, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd", "docker-registry.default.svc:5000/automation-maier/autopython35:latest"]}, {"sizeBytes": 877705841, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78"]}, {"sizeBytes": 876895339, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/dnsclient@sha256:dd3f91b491786c9adb83d7a07508a1ae34d931ae8c4c8bf75101227e2f768eac"]}, {"sizeBytes": 
876893313, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/vcenterfileclient@sha256:8db4855733cbac4bb9ee5b4f64a50b222499f7da0171c3bb0b3b417ad369a092"]}, {"sizeBytes": 817543822, "names": ["docker-registry.default.svc:5000/automation-paul/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802", "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802"]}, {"sizeBytes": 814403688, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07"]}, {"sizeBytes": 813911535, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe"]}, {"sizeBytes": 813911394, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23", "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23"]}, {"sizeBytes": 813910911, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:7e6aa52152ae77df12068d8fc7bd4173df307e46417346b4772c94691ff002a8", "docker-registry.default.svc:5000/automation/autopython35_networkapi@sha256:7e6aa52152ae77df12068d8fc7bd4173df307e46417346b4772c94691ff002a8"]}, {"sizeBytes": 813873837, "names": ["docker-registry.default.svc:5000/automation-ape/autopython35_networkapi@sha256:99c8538f0559e1e3c70a1b84d9850eb1087af7e061d2480364fda0794f522a15", "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:99c8538f0559e1e3c70a1b84d9850eb1087af7e061d2480364fda0794f522a15"]}, {"sizeBytes": 810375991, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af", "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af"]}, {"sizeBytes": 739443496, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:3f6578aa5338e03926a6dd7245a2762c0a2f08b2013859110325e8e7a7ea9d73"]}, {"sizeBytes": 698958528, "names": ["docker-registry.default.svc:5000/aidablu-test/aida-blu@sha256:4f2443aba2b56cfcbebce95971d45352b44be852148e463aaa9375e820be4ee5"]}, {"sizeBytes": 666631364, "names": ["docker-registry.default.svc:5000/automation-qa/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e", "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e"]}, {"sizeBytes": 658897636, "names": ["docker-registry.default.svc:5000/automation/autopython35@sha256:2b14c63d79371d131579896b172c132d2eb91c7d7223ced6d4ab0830a928ac5b", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:2b14c63d79371d131579896b172c132d2eb91c7d7223ced6d4ab0830a928ac5b"]}, {"sizeBytes": 658894431, 
"names": ["docker-registry.default.svc:5000/automation-rick/autopython35_taggingclient@sha256:9144413627c418210c86fa5f9a767b27536af5e34e9587e68f3ede8ce66ba126", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:9144413627c418210c86fa5f9a767b27536af5e34e9587e68f3ede8ce66ba126"]}, {"sizeBytes": 658894075, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35@sha256:791facc2583fd807e52019d374e702788db3172f9cd6d51f372137588ea1a356", "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:791facc2583fd807e52019d374e702788db3172f9cd6d51f372137588ea1a356", "docker-registry.default.svc:5000/automation/autopython35@sha256:791facc2583fd807e52019d374e702788db3172f9cd6d51f372137588ea1a356"]}, {"sizeBytes": 652501928, "names": ["docker-registry.default.svc:5000/automation-qa-managed-connectivity/aciapi@sha256:715a80d38b46ecfd478b6451b86d3c9863540fb4d4bc3683068000a68c46443a", "docker-registry.default.svc:5000/automation-qa-managed-connectivity/aciapi:latest"]}, {"sizeBytes": 652445214, "names": ["docker-registry.default.svc:5000/automation-rapp/ftpclient@sha256:f9783f26c50de3dc491e95e82758f1fa40a5c0d775fe9615605f7d19aba1da53", "docker-registry.default.svc:5000/automation-rapp/ftpclient:latest"]}, {"sizeBytes": 652441829, "names": ["docker-registry.default.svc:5000/automation-rapp/dnsclient@sha256:2699c66616907f23246698ad2cdba80e057962cf030f15a15f94cd13ad7b7e81"]}, {"sizeBytes": 644972829, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35_secappservice@sha256:12f46670218eec4e0696cfbd1ceb469cc9be4b66ca64b81cf08758b22992794d", "docker-registry.default.svc:5000/automation-maier/autopython35_secappservice@sha256:12f46670218eec4e0696cfbd1ceb469cc9be4b66ca64b81cf08758b22992794d"]}, {"sizeBytes": 644972829, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35_secappservice@sha256:45e38290baee5be1e0878df7af4f6b95432757f96b568027b25b539dd8b92235", "docker-registry.default.svc:5000/automation-haertenstein/autopython35_secappservice@sha256:45e38290baee5be1e0878df7af4f6b95432757f96b568027b25b539dd8b92235"]}, {"sizeBytes": 644972761, "names": ["docker-registry.default.svc:5000/automation-paul/autopython35_taggingclient@sha256:4d2bc9038e18d8d70eab47972618ca7cb1d48b72e4ead631ad1b363327942587", "docker-registry.default.svc:5000/automation-puscasu/autopython35_taggingclient@sha256:4d2bc9038e18d8d70eab47972618ca7cb1d48b72e4ead631ad1b363327942587"]}, {"sizeBytes": 644972703, "names": ["docker-registry.default.svc:5000/automation-ape/autopython35_secappservice@sha256:922067b2ecaefc9bb73ca3515fb45bf8cea3fc488c93667d7968553a665257dd", "docker-registry.default.svc:5000/automation/autopython35_secappservice@sha256:922067b2ecaefc9bb73ca3515fb45bf8cea3fc488c93667d7968553a665257dd"]}, {"sizeBytes": 644972703, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_secappservice@sha256:26f904900109f7d62ba78f70fdccdf7ee1b183c2546e53abe378cbc4a822fb57", "docker-registry.default.svc:5000/automation/autopython35_secappservice@sha256:26f904900109f7d62ba78f70fdccdf7ee1b183c2546e53abe378cbc4a822fb57"]}, {"sizeBytes": 644972333, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd", "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd"]}, {"sizeBytes": 644972307, "names": 
["docker-registry.default.svc:5000/automation-haertenstein/autopython35_taggingclient@sha256:57c0fec6143ed6a4fce180187e68d23e87e284b14dc1720e2016d4517caa00c1", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:57c0fec6143ed6a4fce180187e68d23e87e284b14dc1720e2016d4517caa00c1"]}, {"sizeBytes": 644972307, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35_taggingclient@sha256:8b5c13d0286910a735b591205093ccbc08c0da0bb8c2313e6236dd535518cf5e", "docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:8b5c13d0286910a735b591205093ccbc08c0da0bb8c2313e6236dd535518cf5e"]}, {"sizeBytes": 644972307, "names": ["docker-registry.default.svc:5000/automation-ape/autopython35_taggingclient@sha256:01203b4430d25c73969bf861df59ecbe55dc4c21b4b07a43a28fb6d8c2c9af40", "docker-registry.default.svc:5000/automation-develop/autopython35_taggingclient@sha256:01203b4430d25c73969bf861df59ecbe55dc4c21b4b07a43a28fb6d8c2c9af40"]}, {"sizeBytes": 644972307, "names": ["docker-registry.default.svc:5000/automation-ape/autopython35_taggingclient@sha256:076aaeb48e998c1da755d932a74a1241aa8f5cc97381c764cc51b349fcd0eae1", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:076aaeb48e998c1da755d932a74a1241aa8f5cc97381c764cc51b349fcd0eae1"]}, {"sizeBytes": 644971951, "names": ["docker-registry.default.svc:5000/automation-ape/autopython35@sha256:b9065e3e3d871799b94b7dad8845b53315fb4158799ad8996a0c34659d0d31dd", "docker-registry.default.svc:5000/automation-develop/autopython35@sha256:b9065e3e3d871799b94b7dad8845b53315fb4158799ad8996a0c34659d0d31dd", "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:b9065e3e3d871799b94b7dad8845b53315fb4158799ad8996a0c34659d0d31dd"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:04:20Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:03Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:04:20Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:03Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T13:51:39Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:03Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:51:03Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:51:03Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:39:23Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:03Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node02.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node02.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node02.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871169", "creationTimestamp": "2018-01-31T13:07:22Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": 
"a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node02.os.ad.scanplus.de", "uid": "ad1fd267-0687-11e8-8e46-005056aa3492"}}]}}\n', '') ok: [sp-os-node02.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node02.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node02.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-01-31T13:07:22Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node02.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node02.os.ad.scanplus.de", "resourceVersion": "93871169", "selfLink": "/api/v1/nodes/sp-os-node02.os.ad.scanplus.de", "uid": "ad1fd267-0687-11e8-8e46-005056aa3492" }, "spec": { "externalID": "sp-os-node02.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.244", "type": "InternalIP" }, { "address": "sp-os-node02.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:03Z", "lastTransitionTime": "2018-09-13T23:04:20Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:03Z", "lastTransitionTime": "2018-09-13T23:04:20Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:03Z", "lastTransitionTime": "2019-01-09T13:51:39Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:03Z", "lastTransitionTime": "2019-01-09T14:51:03Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:03Z", "lastTransitionTime": "2018-09-13T22:39:23Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi@sha256:afbdea5ea505f5a1604e9f08a4805bc749c3b3aa6ecc990517c8fce0bbd03423" ], "sizeBytes": 1889747575 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/networkapi@sha256:3b748fdb5c05c4e18109aab2b53327deba8e9d4743becfac875323b2cf51895c", "docker-registry.default.svc:5000/automation-gleim/networkapi:latest" ], "sizeBytes": 1839139266 }, { "names": [ 
"docker-registry.default.svc:5000/automation-gleim/autopython35_networkapi@sha256:c5aada4116d01acaae3c86329a2b1fcf6f0f7c95547e27b485195ced8befbeba" ], "sizeBytes": 1838632466 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:6573ce619dbbb908583af241a3f02b1502a5d586f3b7f49094cf627c6912ba26" ], "sizeBytes": 1838632433 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/automation-maier-blu/aidabluworkflows@sha256:2a71d103bcdc159f16a0166f1d81f45507fd1716f20872bc0657b4af53e5377b" ], "sizeBytes": 1238893950 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:cd02dd0022aa91941dba88f0ccc5ddccf54c99756235f478495039b77c23d8cc" ], "sizeBytes": 1168831034 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:478d2772bf6dc0a4f89d498d54779ab63d26849529f44671dceb7bcf79d4bc99", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest" ], "sizeBytes": 1168830389 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35_taggingclient@sha256:bce082c6697792bf0a51f8c60e3072624c682858ce60cc4b0fe95e0b32dac922", "docker-registry.default.svc:5000/automation-prodtest/autopython35_taggingclient:latest" ], "sizeBytes": 971853973 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35_taggingclient@sha256:0c4af808fc9ce2eba6e40ddef31e5ceb2021edd72ade164e9c48e70d9b4a0a39" ], "sizeBytes": 971853606 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/automationapi@sha256:49885c95984f8a03f0cb024ae7f15818cce7a7167f1c43039bbf65cf1af0c01d" ], "sizeBytes": 881870541 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/aciapi@sha256:a0656e96f55a27775ef72c7af534244cecacc60dc1c4e7e7a6bfa8857ad65a1f" ], "sizeBytes": 877840275 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/aciapi@sha256:5592eb4f798092f2b8d90b4c2090a06f7c3118867bb3772e3cf14dc87311daef" ], "sizeBytes": 877838781 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/ftpclient@sha256:2d099141371733ec4eab4914d2af86ce2088d5fc601d6738ae0be7c10fb3c901" ], "sizeBytes": 877802186 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/ftpclient@sha256:f5aec16035defd3b78dae96b2a243333d05de3462385d05a9bb55364b6ee1d6f" ], "sizeBytes": 877796412 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/vcenterfileclient@sha256:1ae161d08e1be8d03ef906215042fac845596e345445caf4d638214a139c7a10" ], "sizeBytes": 877794526 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/vcenterfileclient@sha256:d6a637d8e57cc73d7780ae24ea91e78c3acb90b8520c597edd7bc022bdab9d0d" ], "sizeBytes": 877794354 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/ftpclient@sha256:7e3054758eb12cae27e663bd4536148ed0e8bfe13b891f18096fa3240870afda" ], "sizeBytes": 877793199 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd", 
"docker-registry.default.svc:5000/automation-maier/autopython35:latest" ], "sizeBytes": 877716514 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78" ], "sizeBytes": 877705841 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/dnsclient@sha256:dd3f91b491786c9adb83d7a07508a1ae34d931ae8c4c8bf75101227e2f768eac" ], "sizeBytes": 876895339 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/vcenterfileclient@sha256:8db4855733cbac4bb9ee5b4f64a50b222499f7da0171c3bb0b3b417ad369a092" ], "sizeBytes": 876893313 }, { "names": [ "docker-registry.default.svc:5000/automation-paul/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802", "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802" ], "sizeBytes": 817543822 }, { "names": [ "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07" ], "sizeBytes": 814403688 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe" ], "sizeBytes": 813911535 }, { "names": [ "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23", "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23" ], "sizeBytes": 813911394 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:7e6aa52152ae77df12068d8fc7bd4173df307e46417346b4772c94691ff002a8", "docker-registry.default.svc:5000/automation/autopython35_networkapi@sha256:7e6aa52152ae77df12068d8fc7bd4173df307e46417346b4772c94691ff002a8" ], "sizeBytes": 813910911 }, { "names": [ "docker-registry.default.svc:5000/automation-ape/autopython35_networkapi@sha256:99c8538f0559e1e3c70a1b84d9850eb1087af7e061d2480364fda0794f522a15", "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:99c8538f0559e1e3c70a1b84d9850eb1087af7e061d2480364fda0794f522a15" ], "sizeBytes": 813873837 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af", "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af" ], "sizeBytes": 810375991 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:3f6578aa5338e03926a6dd7245a2762c0a2f08b2013859110325e8e7a7ea9d73" ], "sizeBytes": 739443496 }, { "names": [ "docker-registry.default.svc:5000/aidablu-test/aida-blu@sha256:4f2443aba2b56cfcbebce95971d45352b44be852148e463aaa9375e820be4ee5" ], "sizeBytes": 698958528 }, { "names": [ 
"docker-registry.default.svc:5000/automation-qa/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e", "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e" ], "sizeBytes": 666631364 }, { "names": [ "docker-registry.default.svc:5000/automation/autopython35@sha256:2b14c63d79371d131579896b172c132d2eb91c7d7223ced6d4ab0830a928ac5b", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:2b14c63d79371d131579896b172c132d2eb91c7d7223ced6d4ab0830a928ac5b" ], "sizeBytes": 658897636 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/autopython35_taggingclient@sha256:9144413627c418210c86fa5f9a767b27536af5e34e9587e68f3ede8ce66ba126", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:9144413627c418210c86fa5f9a767b27536af5e34e9587e68f3ede8ce66ba126" ], "sizeBytes": 658894431 }, { "names": [ "docker-registry.default.svc:5000/automation-develop/autopython35@sha256:791facc2583fd807e52019d374e702788db3172f9cd6d51f372137588ea1a356", "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:791facc2583fd807e52019d374e702788db3172f9cd6d51f372137588ea1a356", "docker-registry.default.svc:5000/automation/autopython35@sha256:791facc2583fd807e52019d374e702788db3172f9cd6d51f372137588ea1a356" ], "sizeBytes": 658894075 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-managed-connectivity/aciapi@sha256:715a80d38b46ecfd478b6451b86d3c9863540fb4d4bc3683068000a68c46443a", "docker-registry.default.svc:5000/automation-qa-managed-connectivity/aciapi:latest" ], "sizeBytes": 652501928 }, { "names": [ "docker-registry.default.svc:5000/automation-rapp/ftpclient@sha256:f9783f26c50de3dc491e95e82758f1fa40a5c0d775fe9615605f7d19aba1da53", "docker-registry.default.svc:5000/automation-rapp/ftpclient:latest" ], "sizeBytes": 652445214 }, { "names": [ "docker-registry.default.svc:5000/automation-rapp/dnsclient@sha256:2699c66616907f23246698ad2cdba80e057962cf030f15a15f94cd13ad7b7e81" ], "sizeBytes": 652441829 }, { "names": [ "docker-registry.default.svc:5000/automation-develop/autopython35_secappservice@sha256:12f46670218eec4e0696cfbd1ceb469cc9be4b66ca64b81cf08758b22992794d", "docker-registry.default.svc:5000/automation-maier/autopython35_secappservice@sha256:12f46670218eec4e0696cfbd1ceb469cc9be4b66ca64b81cf08758b22992794d" ], "sizeBytes": 644972829 }, { "names": [ "docker-registry.default.svc:5000/automation-develop/autopython35_secappservice@sha256:45e38290baee5be1e0878df7af4f6b95432757f96b568027b25b539dd8b92235", "docker-registry.default.svc:5000/automation-haertenstein/autopython35_secappservice@sha256:45e38290baee5be1e0878df7af4f6b95432757f96b568027b25b539dd8b92235" ], "sizeBytes": 644972829 }, { "names": [ "docker-registry.default.svc:5000/automation-paul/autopython35_taggingclient@sha256:4d2bc9038e18d8d70eab47972618ca7cb1d48b72e4ead631ad1b363327942587", "docker-registry.default.svc:5000/automation-puscasu/autopython35_taggingclient@sha256:4d2bc9038e18d8d70eab47972618ca7cb1d48b72e4ead631ad1b363327942587" ], "sizeBytes": 644972761 }, { "names": [ "docker-registry.default.svc:5000/automation-ape/autopython35_secappservice@sha256:922067b2ecaefc9bb73ca3515fb45bf8cea3fc488c93667d7968553a665257dd", "docker-registry.default.svc:5000/automation/autopython35_secappservice@sha256:922067b2ecaefc9bb73ca3515fb45bf8cea3fc488c93667d7968553a665257dd" ], "sizeBytes": 644972703 }, { "names": [ 
"docker-registry.default.svc:5000/automation-haertenstein/autopython35_secappservice@sha256:26f904900109f7d62ba78f70fdccdf7ee1b183c2546e53abe378cbc4a822fb57", "docker-registry.default.svc:5000/automation/autopython35_secappservice@sha256:26f904900109f7d62ba78f70fdccdf7ee1b183c2546e53abe378cbc4a822fb57" ], "sizeBytes": 644972703 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd", "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd" ], "sizeBytes": 644972333 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_taggingclient@sha256:57c0fec6143ed6a4fce180187e68d23e87e284b14dc1720e2016d4517caa00c1", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:57c0fec6143ed6a4fce180187e68d23e87e284b14dc1720e2016d4517caa00c1" ], "sizeBytes": 644972307 }, { "names": [ "docker-registry.default.svc:5000/automation-develop/autopython35_taggingclient@sha256:8b5c13d0286910a735b591205093ccbc08c0da0bb8c2313e6236dd535518cf5e", "docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:8b5c13d0286910a735b591205093ccbc08c0da0bb8c2313e6236dd535518cf5e" ], "sizeBytes": 644972307 }, { "names": [ "docker-registry.default.svc:5000/automation-ape/autopython35_taggingclient@sha256:01203b4430d25c73969bf861df59ecbe55dc4c21b4b07a43a28fb6d8c2c9af40", "docker-registry.default.svc:5000/automation-develop/autopython35_taggingclient@sha256:01203b4430d25c73969bf861df59ecbe55dc4c21b4b07a43a28fb6d8c2c9af40" ], "sizeBytes": 644972307 }, { "names": [ "docker-registry.default.svc:5000/automation-ape/autopython35_taggingclient@sha256:076aaeb48e998c1da755d932a74a1241aa8f5cc97381c764cc51b349fcd0eae1", "docker-registry.default.svc:5000/automation/autopython35_taggingclient@sha256:076aaeb48e998c1da755d932a74a1241aa8f5cc97381c764cc51b349fcd0eae1" ], "sizeBytes": 644972307 }, { "names": [ "docker-registry.default.svc:5000/automation-ape/autopython35@sha256:b9065e3e3d871799b94b7dad8845b53315fb4158799ad8996a0c34659d0d31dd", "docker-registry.default.svc:5000/automation-develop/autopython35@sha256:b9065e3e3d871799b94b7dad8845b53315fb4158799ad8996a0c34659d0d31dd", "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:b9065e3e3d871799b94b7dad8845b53315fb4158799ad8996a0c34659d0d31dd" ], "sizeBytes": 644971951 } ], "nodeInfo": { "architecture": "amd64", "bootID": "b7737317-0428-4a33-b663-e2587a5ef96f", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A8A5B-A1AA-8149-4125-3576F319A9BB" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] 
********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:51:04 +0100 (0:00:11.491) 0:11:39.077 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node03.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:51:51 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "99498", "LimitSIGPENDING": "63379", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "3475556925155", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "3475556925224", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:51:51 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:51:51 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:51:52 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "multi-user.target shutdown.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "3475556291896", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63379", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:51:51 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "3475556288570", "MainPID": "99498", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 
2019-01-09 14:51:52 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "109658112", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:51:51 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:51:51 CET", "StandardInput": "null", "AssertTimestampMonotonic": "3475556288571", "DefaultDependencies": "yes", "Requires": "var.mount -.mount basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "3475556291774", "AllowIsolate": "no", "Wants": "system.slice dnsmasq.service docker.service", "After": "docker.service dnsmasq.service chronyd.service systemd-journald.socket basic.target system.slice ntpd.service -.mount var.mount", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "3475556275203", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "3475556261290", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node03.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:51:52 CET", "ActiveEnterTimestampMonotonic": "3475556925224", "ActiveExitTimestamp": "Wed 2019-01-09 14:51:51 CET", "ActiveExitTimestampMonotonic": "3475556261290", "ActiveState": "active", "After": "docker.service dnsmasq.service chronyd.service systemd-journald.socket basic.target system.slice ntpd.service -.mount var.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:51:51 CET", "AssertTimestampMonotonic": 
"3475556288571", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:51:51 CET", "ConditionTimestampMonotonic": "3475556288570", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "99498", "ExecMainStartTimestamp": "Wed 2019-01-09 14:51:51 CET", "ExecMainStartTimestampMonotonic": "3475556291774", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:51:51 CET", "InactiveEnterTimestampMonotonic": "3475556275203", "InactiveExitTimestamp": "Wed 2019-01-09 14:51:51 CET", "InactiveExitTimestampMonotonic": "3475556291896", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63379", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63379", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "99498", "MemoryAccounting": "yes", "MemoryCurrent": "109658112", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "var.mount -.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", 
"StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice dnsmasq.service docker.service", "WatchdogTimestamp": "Wed 2019-01-09 14:51:52 CET", "WatchdogTimestampMonotonic": "3475556925155", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:51:06 +0100 (0:00:01.676) 0:11:40.753 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node03.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node03.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249840Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.233"}, {"type": "Hostname", "address": "sp-os-node03.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "b5b6f8ae-b68b-426e-a649-47037eb36878", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AE278-B4A6-EC38-75F2-6D7816838230", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147440Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-30T08:26:30Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-30T08:26:30Z", "reason": "KubeletHasSufficientMemory", 
"lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:07:37Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:06Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:28:51Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node03.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node03.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node03.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871185", "creationTimestamp": "2018-03-14T15:52:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node03.os.ad.scanplus.de", "uid": "a57f8c2c-279f-11e8-aab3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node03.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node03.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-03-14T15:52:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node03.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node03.os.ad.scanplus.de", "resourceVersion": "93871185", "selfLink": "/api/v1/nodes/sp-os-node03.os.ad.scanplus.de", "uid": "a57f8c2c-279f-11e8-aab3-005056aa3492" }, "spec": { "externalID": "sp-os-node03.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.233", "type": "InternalIP" }, { "address": "sp-os-node03.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147440Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249840Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2018-11-30T08:26:30Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { 
"lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2018-11-30T08:26:30Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2019-01-09T10:07:37Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2019-01-09T14:51:06Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2018-09-13T21:28:51Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "b5b6f8ae-b68b-426e-a649-47037eb36878", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AE278-B4A6-EC38-75F2-6D7816838230" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node03.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node03.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249840Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.233"}, {"type": "Hostname", "address": "sp-os-node03.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "b5b6f8ae-b68b-426e-a649-47037eb36878", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AE278-B4A6-EC38-75F2-6D7816838230", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147440Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-30T08:26:30Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": 
"2018-11-30T08:26:30Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:07:37Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:06Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:28:51Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:06Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node03.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node03.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node03.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871185", "creationTimestamp": "2018-03-14T15:52:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node03.os.ad.scanplus.de", "uid": "a57f8c2c-279f-11e8-aab3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node03.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node03.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-03-14T15:52:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node03.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node03.os.ad.scanplus.de", "resourceVersion": "93871185", "selfLink": "/api/v1/nodes/sp-os-node03.os.ad.scanplus.de", "uid": "a57f8c2c-279f-11e8-aab3-005056aa3492" }, "spec": { "externalID": "sp-os-node03.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.233", "type": "InternalIP" }, { "address": "sp-os-node03.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147440Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249840Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2018-11-30T08:26:30Z", "message": "kubelet has sufficient disk space available", "reason": 
"KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2018-11-30T08:26:30Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2019-01-09T10:07:37Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2019-01-09T14:51:06Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:06Z", "lastTransitionTime": "2018-09-13T21:28:51Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "b5b6f8ae-b68b-426e-a649-47037eb36878", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AE278-B4A6-EC38-75F2-6D7816838230" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node03.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node03.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249840Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.80.233"}, {"type": "Hostname", "address": "sp-os-node03.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "b5b6f8ae-b68b-426e-a649-47037eb36878", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AE278-B4A6-EC38-75F2-6D7816838230", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147440Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1862991681, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:0e1ea228d2ffaeae7152c69b5ceaf6c3a32e58462ec79874613ddf816fcc0251"]}, {"sizeBytes": 
1855626343, "names": ["docker-registry.default.svc:5000/automation-haertenstein/networkapi@sha256:d196b531f8e023fcd1a588a89099f1deb270dc59bf2da6b4ec2768101dc3fdc4", "docker-registry.default.svc:5000/automation-haertenstein/networkapi:latest"]}, {"sizeBytes": 1838632433, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:6573ce619dbbb908583af241a3f02b1502a5d586f3b7f49094cf627c6912ba26"]}, {"sizeBytes": 1838139141, "names": ["docker-registry.default.svc:5000/automation-rick/networkapi@sha256:24d4b27a2e41b8c5dedca15b306a4feaeaeba3b8e8172328edd154971b3d863e", "docker-registry.default.svc:5000/automation-rick/networkapi:latest"]}, {"sizeBytes": 1838041262, "names": ["docker-registry.default.svc:5000/automation-ziesel/networkapi@sha256:8e16d7b9730410e81b781b2aadd5424e23d47f98248f4ccc5fb668e4e57090ea"]}, {"sizeBytes": 1837643980, "names": ["docker-registry.default.svc:5000/automation-rick/autopython35_networkapi@sha256:e0e894c97f6c27d7ef2c735495b3489ab003f7b8301013b7292981e27c0da8d8", "docker-registry.default.svc:5000/automation-rick/autopython35_networkapi:latest"]}, {"sizeBytes": 1318074697, "names": ["docker-registry.default.svc:5000/automation-qa-service-definitions-blu/aidabluworkflows@sha256:f2823d66f8bfbcdc20f86d46f6d10339ae23b1c3f0b5f7d84da85dbe82997c7f", "docker-registry.default.svc:5000/automation-qa-service-definitions-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1247416602, "names": ["docker-registry.default.svc:5000/aidablu-test/aidabluworkflows@sha256:b9de3318443ff53ac835fd0dbe48f940359e135f62b4c183051d0cab23472cd5"]}, {"sizeBytes": 1237095818, "names": ["docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows@sha256:0711f196c373b1d1b2685156da70d96b69eaef432736c80c03243947a0046a8b", "docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows:latest"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1169385027, "names": ["docker-registry.default.svc:5000/automation-prod/networkapi@sha256:8a6b7b7ce89c5442370f95b5c4d511d632bb28d3fde98713c3e50b6d5f928143"]}, {"sizeBytes": 1169123707, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ae941eda7d7033e89cdea7f94610eb6148207d8a6168208bb6ae253ca6659d89"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1168830720, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ba4324c58bfefdc84b448e1fdd188d40af887681d62c35a57b8bc3d76d0ce398"]}, {"sizeBytes": 1168830388, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:f11827b5616deb91adc568a71de9778da82f2d7090d3676c76b39a25742a22dc", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest"]}, {"sizeBytes": 1168830310, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:d69bf64917cf1a37626f7d5ef8377f9c442cdccc225d369ffcc75f140f6b18b8"]}, {"sizeBytes": 1168830307, "names": 
["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:42dcde1f18de3f984a0503e293978514acecad998a9c934004c32e2576f7996a"]}, {"sizeBytes": 1168826441, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:df3f80df3daf06c5d01fec49204fd8e781ca30a4446d2edf9fbe2a516ce84710"]}, {"sizeBytes": 1168826440, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:04a5d627f9dfb6260470f43f77f66ea2b663b04a87c56fb7f673e5d88eba2823"]}, {"sizeBytes": 1168826440, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:6e744b9b4440a86709f941944d0e4e3c8fd70e5a39bbd51c9994f772b166d894"]}, {"sizeBytes": 1054898910, "names": ["docker-registry.default.svc:5000/automation-puscasu/sshclient@sha256:b0d4d1816b1fa65bdccb65446202e10d8a21466952ae0aba07f50fd18063df2d", "docker-registry.default.svc:5000/automation-puscasu/sshclient:latest"]}, {"sizeBytes": 1054887789, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35_sshclient@sha256:08a91392fc8adc31522b54fb4b5a6b6f48865133e1b4d8603329b7aa3d8bf266"]}, {"sizeBytes": 1054767277, "names": ["docker-registry.default.svc:5000/automation-puscasu/autopython35_sshclient@sha256:342f5b8b6bcd7ed23046805c2e2c249fe9cbf1f2f67a5372cb6329f77cd27ef5"]}, {"sizeBytes": 1054444271, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35_sshclient@sha256:c845710fadb7103cf131eb02b9c5727ceb205e6c384c6065505e3d383090e6bd", "docker-registry.default.svc:5000/automation-maier/autopython35_sshclient:latest"]}, {"sizeBytes": 982648721, "names": ["docker-registry.default.svc:5000/epaperupdater-prod/exchangetoeink@sha256:16b24fdc6674fda6affdab441dc13d3c885f42118c0aa5eb2af28396ad651a2d"]}, {"sizeBytes": 972013853, "names": ["docker-registry.default.svc:5000/automation-prodtest/taggingclient@sha256:a381f23e6b0d77be8befcaa2cd7e14358504e47705144d6857acfcdcb5d385c5"]}, {"sizeBytes": 972013263, "names": ["docker-registry.default.svc:5000/automation-gleim/taggingclient@sha256:8ede292a0150c58e41f06e06bb155c88ab249e78836e543887363c6a85adb735"]}, {"sizeBytes": 928325712, "names": ["docker-registry.default.svc:5000/automation-prodtest/aciapi@sha256:5c6b7ef37b7c29c5d01c554999b31e8dd4036c825e931efdb0947248591df4da"]}, {"sizeBytes": 928278636, "names": ["docker-registry.default.svc:5000/automation-prodtest/ftpclient@sha256:3769b9153b3143691b5d6354b96ca0fb35ac84eec9f79df2b862a1ff89414d6c"]}, {"sizeBytes": 928278127, "names": ["docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:680a2da0a95f171077b8e96b7ab4e2d57072dfb55adc920dad7cc784f8409e06"]}, {"sizeBytes": 881918254, "names": ["docker-registry.default.svc:5000/automation-prodtest/automationapi@sha256:f3ca61e91b3d6d73a19c61ea0d3a1f927196116da994f900a3462a216175f924"]}, {"sizeBytes": 877838184, "names": ["docker-registry.default.svc:5000/automation-gleim/aciapi@sha256:76b14c9a044277a1a13fdf00717ee8b89be28f3b6ad698c4784134eb317d53b3", "docker-registry.default.svc:5000/automation-gleim/aciapi:latest"]}, {"sizeBytes": 877796666, "names": ["docker-registry.default.svc:5000/automation-prodtest/ftpclient@sha256:1c37b83ddcc56e2308d28a7c66043032afd158be16f19c18002a355283cb9615", "docker-registry.default.svc:5000/automation-prodtest/ftpclient:latest"]}, {"sizeBytes": 877796072, "names": ["docker-registry.default.svc:5000/automation-gleim/ftpclient@sha256:7e8b384c929d652392da54d6202cfb47c05546f0d7514e34706be1c65c2346a0", "docker-registry.default.svc:5000/automation-gleim/ftpclient:latest"]}, 
{"sizeBytes": 877795112, "names": ["docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:f436f8c61843da62e6a657e6136343d0dec4d14cfd6c900745b0438d73bf58b3"]}, {"sizeBytes": 877794715, "names": ["docker-registry.default.svc:5000/automation-haertenstein/dnsclient@sha256:d532fb313412db2ee38f4349342743037dcecb1f9a88e95cbe69475592009400"]}, {"sizeBytes": 877794354, "names": ["docker-registry.default.svc:5000/automation-prod/vcenterfileclient@sha256:d6a637d8e57cc73d7780ae24ea91e78c3acb90b8520c597edd7bc022bdab9d0d", "docker-registry.default.svc:5000/automation-prod/vcenterfileclient:latest"]}, {"sizeBytes": 877705841, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78"]}, {"sizeBytes": 877705474, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d"]}, {"sizeBytes": 877705134, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051"]}, {"sizeBytes": 877705083, "names": ["docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443"]}, {"sizeBytes": 877174269, "names": ["docker-registry.default.svc:5000/automation-rick/vcenterfileclient@sha256:c583c4e8af3183bbfb9dfa7840aca33d455ca2e41d79af3d71a1a904b3fb2a67"]}, {"sizeBytes": 877089035, "names": ["docker-registry.default.svc:5000/automation-rick/autopython35@sha256:1cb098cb5dabb124b9c6790178a4d78916c55a5fba3fef20d0b0c13f05fecdb3"]}, {"sizeBytes": 877019958, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:9f8d828e1038702124d067e0f34d1770686bfd33733d40129d407b01a2a3d501"]}, {"sizeBytes": 876893822, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/ftpclient@sha256:d7c83b0067b8fd178f2b5d3fb49145a910dd22dfb238424ecd2a2c9009fd5f6d"]}, {"sizeBytes": 876797832, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35@sha256:6a9d3d076ef3ac87d510886add4b7fbdcfb0fee1d95f97389fea5d316b66abd5"]}, {"sizeBytes": 814403688, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07"]}, {"sizeBytes": 813911535, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe"]}, {"sizeBytes": 813911394, "names": ["docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23", "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-11-30T08:26:30Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:16Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-30T08:26:30Z", "reason": 
"KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:16Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:07:37Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:16Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:51:16Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:51:16Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:28:51Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:16Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node03.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node03.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node03.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871229", "creationTimestamp": "2018-03-14T15:52:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node03.os.ad.scanplus.de", "uid": "a57f8c2c-279f-11e8-aab3-005056aa3492"}}]}}\n', '') ok: [sp-os-node03.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node03.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node03.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-03-14T15:52:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node03.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node03.os.ad.scanplus.de", "resourceVersion": "93871229", "selfLink": "/api/v1/nodes/sp-os-node03.os.ad.scanplus.de", "uid": "a57f8c2c-279f-11e8-aab3-005056aa3492" }, "spec": { "externalID": "sp-os-node03.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.80.233", "type": "InternalIP" }, { "address": "sp-os-node03.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147440Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249840Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:16Z", "lastTransitionTime": "2018-11-30T08:26:30Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": 
"OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:16Z", "lastTransitionTime": "2018-11-30T08:26:30Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:16Z", "lastTransitionTime": "2019-01-09T10:07:37Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:16Z", "lastTransitionTime": "2019-01-09T14:51:16Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:16Z", "lastTransitionTime": "2018-09-13T21:28:51Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:0e1ea228d2ffaeae7152c69b5ceaf6c3a32e58462ec79874613ddf816fcc0251" ], "sizeBytes": 1862991681 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/networkapi@sha256:d196b531f8e023fcd1a588a89099f1deb270dc59bf2da6b4ec2768101dc3fdc4", "docker-registry.default.svc:5000/automation-haertenstein/networkapi:latest" ], "sizeBytes": 1855626343 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:6573ce619dbbb908583af241a3f02b1502a5d586f3b7f49094cf627c6912ba26" ], "sizeBytes": 1838632433 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/networkapi@sha256:24d4b27a2e41b8c5dedca15b306a4feaeaeba3b8e8172328edd154971b3d863e", "docker-registry.default.svc:5000/automation-rick/networkapi:latest" ], "sizeBytes": 1838139141 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/networkapi@sha256:8e16d7b9730410e81b781b2aadd5424e23d47f98248f4ccc5fb668e4e57090ea" ], "sizeBytes": 1838041262 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/autopython35_networkapi@sha256:e0e894c97f6c27d7ef2c735495b3489ab003f7b8301013b7292981e27c0da8d8", "docker-registry.default.svc:5000/automation-rick/autopython35_networkapi:latest" ], "sizeBytes": 1837643980 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-service-definitions-blu/aidabluworkflows@sha256:f2823d66f8bfbcdc20f86d46f6d10339ae23b1c3f0b5f7d84da85dbe82997c7f", "docker-registry.default.svc:5000/automation-qa-service-definitions-blu/aidabluworkflows:latest" ], "sizeBytes": 1318074697 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/aidablu-test/aidabluworkflows@sha256:b9de3318443ff53ac835fd0dbe48f940359e135f62b4c183051d0cab23472cd5" ], "sizeBytes": 1247416602 }, { "names": [ "docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows@sha256:0711f196c373b1d1b2685156da70d96b69eaef432736c80c03243947a0046a8b", "docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows:latest" ], "sizeBytes": 1237095818 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ 
"docker-registry.default.svc:5000/automation-prod/networkapi@sha256:8a6b7b7ce89c5442370f95b5c4d511d632bb28d3fde98713c3e50b6d5f928143" ], "sizeBytes": 1169385027 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ae941eda7d7033e89cdea7f94610eb6148207d8a6168208bb6ae253ca6659d89" ], "sizeBytes": 1169123707 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ba4324c58bfefdc84b448e1fdd188d40af887681d62c35a57b8bc3d76d0ce398" ], "sizeBytes": 1168830720 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:f11827b5616deb91adc568a71de9778da82f2d7090d3676c76b39a25742a22dc", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest" ], "sizeBytes": 1168830388 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:d69bf64917cf1a37626f7d5ef8377f9c442cdccc225d369ffcc75f140f6b18b8" ], "sizeBytes": 1168830310 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:42dcde1f18de3f984a0503e293978514acecad998a9c934004c32e2576f7996a" ], "sizeBytes": 1168830307 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:df3f80df3daf06c5d01fec49204fd8e781ca30a4446d2edf9fbe2a516ce84710" ], "sizeBytes": 1168826441 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:04a5d627f9dfb6260470f43f77f66ea2b663b04a87c56fb7f673e5d88eba2823" ], "sizeBytes": 1168826440 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:6e744b9b4440a86709f941944d0e4e3c8fd70e5a39bbd51c9994f772b166d894" ], "sizeBytes": 1168826440 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/sshclient@sha256:b0d4d1816b1fa65bdccb65446202e10d8a21466952ae0aba07f50fd18063df2d", "docker-registry.default.svc:5000/automation-puscasu/sshclient:latest" ], "sizeBytes": 1054898910 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35_sshclient@sha256:08a91392fc8adc31522b54fb4b5a6b6f48865133e1b4d8603329b7aa3d8bf266" ], "sizeBytes": 1054887789 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/autopython35_sshclient@sha256:342f5b8b6bcd7ed23046805c2e2c249fe9cbf1f2f67a5372cb6329f77cd27ef5" ], "sizeBytes": 1054767277 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35_sshclient@sha256:c845710fadb7103cf131eb02b9c5727ceb205e6c384c6065505e3d383090e6bd", "docker-registry.default.svc:5000/automation-maier/autopython35_sshclient:latest" ], "sizeBytes": 1054444271 }, { "names": [ "docker-registry.default.svc:5000/epaperupdater-prod/exchangetoeink@sha256:16b24fdc6674fda6affdab441dc13d3c885f42118c0aa5eb2af28396ad651a2d" ], "sizeBytes": 982648721 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/taggingclient@sha256:a381f23e6b0d77be8befcaa2cd7e14358504e47705144d6857acfcdcb5d385c5" ], "sizeBytes": 972013853 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/taggingclient@sha256:8ede292a0150c58e41f06e06bb155c88ab249e78836e543887363c6a85adb735" ], "sizeBytes": 972013263 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/aciapi@sha256:5c6b7ef37b7c29c5d01c554999b31e8dd4036c825e931efdb0947248591df4da" ], "sizeBytes": 928325712 }, 
{ "names": [ "docker-registry.default.svc:5000/automation-prodtest/ftpclient@sha256:3769b9153b3143691b5d6354b96ca0fb35ac84eec9f79df2b862a1ff89414d6c" ], "sizeBytes": 928278636 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:680a2da0a95f171077b8e96b7ab4e2d57072dfb55adc920dad7cc784f8409e06" ], "sizeBytes": 928278127 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/automationapi@sha256:f3ca61e91b3d6d73a19c61ea0d3a1f927196116da994f900a3462a216175f924" ], "sizeBytes": 881918254 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/aciapi@sha256:76b14c9a044277a1a13fdf00717ee8b89be28f3b6ad698c4784134eb317d53b3", "docker-registry.default.svc:5000/automation-gleim/aciapi:latest" ], "sizeBytes": 877838184 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/ftpclient@sha256:1c37b83ddcc56e2308d28a7c66043032afd158be16f19c18002a355283cb9615", "docker-registry.default.svc:5000/automation-prodtest/ftpclient:latest" ], "sizeBytes": 877796666 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/ftpclient@sha256:7e8b384c929d652392da54d6202cfb47c05546f0d7514e34706be1c65c2346a0", "docker-registry.default.svc:5000/automation-gleim/ftpclient:latest" ], "sizeBytes": 877796072 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:f436f8c61843da62e6a657e6136343d0dec4d14cfd6c900745b0438d73bf58b3" ], "sizeBytes": 877795112 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/dnsclient@sha256:d532fb313412db2ee38f4349342743037dcecb1f9a88e95cbe69475592009400" ], "sizeBytes": 877794715 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/vcenterfileclient@sha256:d6a637d8e57cc73d7780ae24ea91e78c3acb90b8520c597edd7bc022bdab9d0d", "docker-registry.default.svc:5000/automation-prod/vcenterfileclient:latest" ], "sizeBytes": 877794354 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78" ], "sizeBytes": 877705841 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d" ], "sizeBytes": 877705474 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051" ], "sizeBytes": 877705134 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443" ], "sizeBytes": 877705083 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/vcenterfileclient@sha256:c583c4e8af3183bbfb9dfa7840aca33d455ca2e41d79af3d71a1a904b3fb2a67" ], "sizeBytes": 877174269 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:1cb098cb5dabb124b9c6790178a4d78916c55a5fba3fef20d0b0c13f05fecdb3" ], "sizeBytes": 877089035 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:9f8d828e1038702124d067e0f34d1770686bfd33733d40129d407b01a2a3d501" ], "sizeBytes": 877019958 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/ftpclient@sha256:d7c83b0067b8fd178f2b5d3fb49145a910dd22dfb238424ecd2a2c9009fd5f6d" ], "sizeBytes": 876893822 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:6a9d3d076ef3ac87d510886add4b7fbdcfb0fee1d95f97389fea5d316b66abd5" ], "sizeBytes": 876797832 }, { "names": [ 
"docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:bfe73eb38d007ec3566e2f657640c5e85405926baa14dd9afe0f6135a8506c07" ], "sizeBytes": 814403688 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe" ], "sizeBytes": 813911535 }, { "names": [ "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23", "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:f1dc698b6363c074bf39f87b4ef9f2deae24b9861361fefa22c8d465ef264b23" ], "sizeBytes": 813911394 } ], "nodeInfo": { "architecture": "amd64", "bootID": "b5b6f8ae-b68b-426e-a649-47037eb36878", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AE278-B4A6-EC38-75F2-6D7816838230" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:51:17 +0100 (0:00:11.382) 0:11:52.136 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node05.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 
2019-01-09 14:52:17 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "88136", "LimitSIGPENDING": "63382", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "10162201961204", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "10162201961365", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:52:18 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:18 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:52:18 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "multi-user.target shutdown.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "10162201355480", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:52:18 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "10162201351323", "MainPID": "88136", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:18 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "97746944", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:52:18 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:18 CET", "StandardInput": "null", "AssertTimestampMonotonic": "10162201351324", "DefaultDependencies": "yes", "Requires": "-.mount var.mount basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", 
"BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "10162201355344", "AllowIsolate": "no", "Wants": "docker.service system.slice dnsmasq.service", "After": "var.mount ntpd.service systemd-journald.socket docker.service system.slice basic.target chronyd.service dnsmasq.service -.mount", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "10162201336747", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "10162201316836", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node05.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:18 CET", "ActiveEnterTimestampMonotonic": "10162201961365", "ActiveExitTimestamp": "Wed 2019-01-09 14:52:17 CET", "ActiveExitTimestampMonotonic": "10162201316836", "ActiveState": "active", "After": "var.mount ntpd.service systemd-journald.socket docker.service system.slice basic.target chronyd.service dnsmasq.service -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:52:18 CET", "AssertTimestampMonotonic": "10162201351324", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:52:18 CET", "ConditionTimestampMonotonic": "10162201351323", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "88136", "ExecMainStartTimestamp": "Wed 2019-01-09 14:52:18 CET", "ExecMainStartTimestampMonotonic": "10162201355344", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", 
"FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:18 CET", "InactiveEnterTimestampMonotonic": "10162201336747", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:18 CET", "InactiveExitTimestampMonotonic": "10162201355480", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "88136", "MemoryAccounting": "yes", "MemoryCurrent": "97746944", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount var.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker.service system.slice dnsmasq.service", "WatchdogTimestamp": "Wed 2019-01-09 14:52:18 CET", "WatchdogTimestampMonotonic": "10162201961204", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:51:19 +0100 (0:00:01.938) 0:11:54.074 ***** Using 
module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node05.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node05.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.88"}, {"type": "Hostname", "address": "sp-os-node05.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "8f04876c-fe3c-44ee-8173-81b6068eaab7", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AF748-24CB-5955-94B8-40EC6727214E", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-14T12:55:00Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-14T12:55:00Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:08:03Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:19Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:31:40Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node05.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node05.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node05.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871252", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": 
{"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node05.os.ad.scanplus.de", "uid": "9a4663d0-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node05.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node05.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node05.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node05.os.ad.scanplus.de", "resourceVersion": "93871252", "selfLink": "/api/v1/nodes/sp-os-node05.os.ad.scanplus.de", "uid": "9a4663d0-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node05.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.88", "type": "InternalIP" }, { "address": "sp-os-node05.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2018-11-14T12:55:00Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2018-11-14T12:55:00Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2019-01-09T10:08:03Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2019-01-09T14:51:19Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2018-09-13T21:31:40Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "8f04876c-fe3c-44ee-8173-81b6068eaab7", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": 
"422AF748-24CB-5955-94B8-40EC6727214E" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node05.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node05.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.88"}, {"type": "Hostname", "address": "sp-os-node05.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "8f04876c-fe3c-44ee-8173-81b6068eaab7", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AF748-24CB-5955-94B8-40EC6727214E", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-14T12:55:00Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-14T12:55:00Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:08:03Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:19Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:31:40Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:19Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node05.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node05.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node05.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871252", 
"creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node05.os.ad.scanplus.de", "uid": "9a4663d0-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node05.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node05.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node05.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node05.os.ad.scanplus.de", "resourceVersion": "93871252", "selfLink": "/api/v1/nodes/sp-os-node05.os.ad.scanplus.de", "uid": "9a4663d0-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node05.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.88", "type": "InternalIP" }, { "address": "sp-os-node05.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2018-11-14T12:55:00Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2018-11-14T12:55:00Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2019-01-09T10:08:03Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2019-01-09T14:51:19Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:19Z", "lastTransitionTime": "2018-09-13T21:31:40Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "8f04876c-fe3c-44ee-8173-81b6068eaab7", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": 
"linux", "osImage": "Unknown", "systemUUID": "422AF748-24CB-5955-94B8-40EC6727214E" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node05.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node05.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.88"}, {"type": "Hostname", "address": "sp-os-node05.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "8f04876c-fe3c-44ee-8173-81b6068eaab7", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AF748-24CB-5955-94B8-40EC6727214E", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1719654738, "names": ["docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f"]}, {"sizeBytes": 1626551984, "names": ["docker.io/mrsiano/grafana-ocp@sha256:c3df94b5c3aaf16c5b393780939d30073ac897b6bdd037b2aeb64e9a52581490", "docker.io/mrsiano/grafana-ocp:latest"]}, {"sizeBytes": 1371524286, "names": ["docker-registry.default.svc:5000/mhe-blu/aidabluworkflows@sha256:6c3e03bc32b64dc9ec6f663d1c61c8578672b45fa99dcf2904670531d5a366ad"]}, {"sizeBytes": 1367415809, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:d58fcd36ac381f430007485d4d1d2234e8846c611d3abbfdcda86233721b17bd"]}, {"sizeBytes": 1367412932, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8cc8d6a6ccbb6720f1597bc455c67161ce589c2ffeb07664b1f744454777c411", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1367412535, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:840e0426bcade49105d1cfe6ebca240d8fabfc3cbbb9b24f81bd2a0fc8f5ed9b"]}, {"sizeBytes": 1365912867, "names": ["registry.spdev.net/aidablu/aidabluworkflows@sha256:756674d57bbfd78d7e0aad89c2861adcba4f9356dcefdaafaa06436e3b19caf3"]}, {"sizeBytes": 1365058051, "names": ["docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:918e8988a972053c4ae030bd2202f5a7414d3e2a547e87d2823d93a42a84dcb0"]}, {"sizeBytes": 1355421688, 
"names": ["registry.spdev.net/aidablu/aidabluworkflows@sha256:6e904bba02adb0a244f05349d941f31d175d670fd619f15f1fe83b8449689c2e"]}, {"sizeBytes": 1322120182, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:4b4eb2d1b3e8c77808c9a231d6cac4abb9b53da383eb726f3e06d047b51863ba"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1260369106, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28"]}, {"sizeBytes": 1260329003, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:fbbe87a501e72a1c9575515c024ba3dd6f449cb939149bb5e943c23ab7468776"]}, {"sizeBytes": 1241092356, "names": ["docker-registry.default.svc:5000/blu-behrens/aidabluworkflows@sha256:b514a9ca9ffb96ed37acf27b14630e1d64768e384831fa07d29dbc1af9b86fba"]}, {"sizeBytes": 1237242995, "names": ["docker-registry.default.svc:5000/sdi-openshift/aidabluworkflows@sha256:3f792fcc2a311eecdcd37411f275099d45d34982dd62988d2e27ac04538cf925"]}, {"sizeBytes": 1237233603, "names": ["docker-registry.default.svc:5000/aidblu-132/aidabluworkflows@sha256:b8e02d587456638baa28f36fee7cbed81b25f06bfa0047ff16f91ff60ff37436", "docker-registry.default.svc:5000/aidblu-132/aidabluworkflows:latest"]}, {"sizeBytes": 1237080777, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:04ae8faf03f4ceea2cc6148d8e4230da5f335dcfa614e3579c5097fb5477d179", "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1196633461, "names": ["registry.spdev.net/aidablu/mistral@sha256:9e73364d2c4c1c18048f0b222c891c04093e6dd51767ea850ab11297f5a07435", "registry.spdev.net/aidablu/mistral:7.0.1"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 997421106, "names": ["docker-registry.default.svc:5000/nodejs-test/sc-frontend-dev@sha256:09ea65db914efb89e3266e8551e167ec65e0a307da2f5c160f7f40cdb0a869d5"]}, {"sizeBytes": 873670787, "names": ["docker-registry.default.svc:5000/aida-1423/aida-portal@sha256:b91beec698791c84ff00d93929fcc7283ae28def8b5ad46e54282433ed0d3c8f"]}, {"sizeBytes": 848591512, "names": ["docker-registry.default.svc:5000/syi-test/ec-portal@sha256:ead4b74df73367479451bd0a511d99a2387a5f36412faaa2a3b7e3003e5d0adb"]}, {"sizeBytes": 783548623, "names": ["docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:c02b02a84b22d969289ed8d51cde437d54630f467f0df7ffcb196a55588737dd", "docker-registry.default.svc:5000/rapp-test/aida-blu:latest"]}, {"sizeBytes": 773428925, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:3a9b09eb0a146040c174791748d152a2730aea5f6a029acae389baf5c9a58f7f", "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu:develop"]}, {"sizeBytes": 769122552, "names": ["docker-registry.default.svc:5000/rapp-test/blu-python@sha256:626e42db6f10c4174cb461eba337466b8391750447eef9c9eb56ae2f2a0ac0e7"]}, {"sizeBytes": 706339178, "names": 
["docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu@sha256:3ba31a778b1d6c4b74ac5ae1ff8d03bd6da85e084e1f57f9584dcc4fbbd69e57"]}, {"sizeBytes": 703296224, "names": ["docker-registry.default.svc:5000/blu-behrens/aida-blu@sha256:caa13f43545746e477aa01628bd8a75c87ad1a07692b9801722bab08b5081839"]}, {"sizeBytes": 702150311, "names": ["docker-registry.default.svc:5000/blu-mhe/aida-blu@sha256:eb7d6b48480c942ba2ce37084ffb2a48eb705efdb970946e105d365623fae93a", "docker-registry.default.svc:5000/blu-mhe/aida-blu:develop"]}, {"sizeBytes": 701148931, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aida-blu@sha256:1bfa38e361910c7b8cdb954e78dd89325aab4bbf5112720b3a00834c86fbb287"]}, {"sizeBytes": 699992342, "names": ["docker-registry.default.svc:5000/sdi-openshift/aida-blu@sha256:79832f0c94ea13b81343be54196e63edc5228142059fda9e3c478d404ed233ca"]}, {"sizeBytes": 685991088, "names": ["docker.io/centos/python-36-centos7@sha256:091d56e3ab03d52ef0ffac4b88e7e1fa24ea0243bfd05297882c12ff8a0ba1df", "docker.io/centos/python-36-centos7:latest"]}, {"sizeBytes": 683822752, "names": ["docker-registry.default.svc:5000/test-rapp-blu/basepython@sha256:fc4a048d9b15bb9c9c5cd4b06f9e0ccd7fca5e219b66aa81b66f5e87057022ab"]}, {"sizeBytes": 683822752, "names": ["docker-registry.default.svc:5000/test-rapp-blu/basepython@sha256:251d31126caf45726613d031e68c33813511866f514a5dca534a30d4e50e7ad1"]}, {"sizeBytes": 657678962, "names": ["docker-registry.default.svc:5000/blu-behrens/python-test@sha256:aa6cc36ac260fa89329ba2b3c694bbba23838dcaf9eacb4c7fa49fd4bdce07f9", "docker-registry.default.svc:5000/blu-behrens/python-test:latest"]}, {"sizeBytes": 630586883, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_taggingclient@sha256:fdc08aca271e26a516ea1dadc3cb73a69374386fdd31e608bc478ae6b3698943", "docker-registry.default.svc:5000/pschoenthaler/autopython35_taggingclient@sha256:fdc08aca271e26a516ea1dadc3cb73a69374386fdd31e608bc478ae6b3698943"]}, {"sizeBytes": 630586455, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35@sha256:02df1a8c3ba08a86cdca7f1e9ebc4a8939e6d69d6eab4efbb586d7ac1c9820fd", "docker-registry.default.svc:5000/pschoenthaler-automation/autopython35@sha256:02df1a8c3ba08a86cdca7f1e9ebc4a8939e6d69d6eab4efbb586d7ac1c9820fd"]}, {"sizeBytes": 629245986, "names": ["docker-registry.default.svc:5000/automation-heine/autopython35_taggingclient@sha256:312de12001785159cc7f48016f021030c185290928d726d3792260531983e24b"]}, {"sizeBytes": 629245558, "names": ["docker-registry.default.svc:5000/automation-heine-blu/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86", "docker-registry.default.svc:5000/automation-heine/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86"]}, {"sizeBytes": 627139161, "names": ["docker-registry.default.svc:5000/openshift/python@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421", "registry.access.redhat.com/rhscl/python-35-rhel7@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421"]}, {"sizeBytes": 621852384, "names": ["docker-registry.default.svc:5000/servicecenter-nginx/servicecenter-backend-dev@sha256:99123df3916ba0f34a889975d0d18831792697e218a1bf394626a491d420c56d"]}, {"sizeBytes": 490733257, "names": ["docker-registry.default.svc:5000/openshift/nodejs@sha256:a9b89bb53fef405ea73f3eaff2dafa0c37c2cc988586b1a8a0e3bc19de07d4b8"]}, {"sizeBytes": 487134664, "names": 
["docker-registry.default.svc:5000/openshift/mongodb@sha256:a98d7e38535780391e7dbf7ec80ce1b1f63e58d2da69cce9491138b080f5c8d0", "registry.access.redhat.com/rhscl/mongodb-34-rhel7@sha256:a98d7e38535780391e7dbf7ec80ce1b1f63e58d2da69cce9491138b080f5c8d0"]}, {"sizeBytes": 429794753, "names": ["registry.redhat.io/openshift3/ose-docker-builder@sha256:5880ac0c5326869a6f023d1607dee97d73d492e78a69a9750c27ebda93c6f004", "registry.redhat.io/openshift3/ose-docker-builder:v3.11.51"]}, {"sizeBytes": 419508201, "names": ["docker-registry.default.svc:5000/openshift/mysql@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed", "registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed"]}, {"sizeBytes": 361171027, "names": ["registry.redhat.io/openshift3/ose-deployer@sha256:c16be3658755a19ba6ebb7af0b2890ba264106ea9013eb0c2c3c71c8856959bb", "registry.redhat.io/openshift3/ose-deployer:v3.11.51"]}, {"sizeBytes": 335495304, "names": ["docker.io/centos/postgresql-96-centos7@sha256:d87befdd35b7c2be67e1a5060395a06386e7905353cae7d0e37e6c3119010e61"]}, {"sizeBytes": 228241928, "names": ["docker.io/openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a", "docker.io/openshift/oauth-proxy:v1.0.0"]}, {"sizeBytes": 222046071, "names": ["registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34"]}, {"sizeBytes": 214236553, "names": ["registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-11-14T12:55:00Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:29Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-14T12:55:00Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:29Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:08:03Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:29Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:51:29Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:51:29Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:31:40Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:29Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node05.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node05.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node05.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871296", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": 
"a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node05.os.ad.scanplus.de", "uid": "9a4663d0-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') ok: [sp-os-node05.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node05.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node05.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node05.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node05.os.ad.scanplus.de", "resourceVersion": "93871296", "selfLink": "/api/v1/nodes/sp-os-node05.os.ad.scanplus.de", "uid": "9a4663d0-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node05.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.88", "type": "InternalIP" }, { "address": "sp-os-node05.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:29Z", "lastTransitionTime": "2018-11-14T12:55:00Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:29Z", "lastTransitionTime": "2018-11-14T12:55:00Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:29Z", "lastTransitionTime": "2019-01-09T10:08:03Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:29Z", "lastTransitionTime": "2019-01-09T14:51:29Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:29Z", "lastTransitionTime": "2018-09-13T21:31:40Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f" ], "sizeBytes": 1719654738 }, { "names": [ "docker.io/mrsiano/grafana-ocp@sha256:c3df94b5c3aaf16c5b393780939d30073ac897b6bdd037b2aeb64e9a52581490", "docker.io/mrsiano/grafana-ocp:latest" ], "sizeBytes": 1626551984 }, { "names": [ 
"docker-registry.default.svc:5000/mhe-blu/aidabluworkflows@sha256:6c3e03bc32b64dc9ec6f663d1c61c8578672b45fa99dcf2904670531d5a366ad" ], "sizeBytes": 1371524286 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:d58fcd36ac381f430007485d4d1d2234e8846c611d3abbfdcda86233721b17bd" ], "sizeBytes": 1367415809 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8cc8d6a6ccbb6720f1597bc455c67161ce589c2ffeb07664b1f744454777c411", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest" ], "sizeBytes": 1367412932 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:840e0426bcade49105d1cfe6ebca240d8fabfc3cbbb9b24f81bd2a0fc8f5ed9b" ], "sizeBytes": 1367412535 }, { "names": [ "registry.spdev.net/aidablu/aidabluworkflows@sha256:756674d57bbfd78d7e0aad89c2861adcba4f9356dcefdaafaa06436e3b19caf3" ], "sizeBytes": 1365912867 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:918e8988a972053c4ae030bd2202f5a7414d3e2a547e87d2823d93a42a84dcb0" ], "sizeBytes": 1365058051 }, { "names": [ "registry.spdev.net/aidablu/aidabluworkflows@sha256:6e904bba02adb0a244f05349d941f31d175d670fd619f15f1fe83b8449689c2e" ], "sizeBytes": 1355421688 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:4b4eb2d1b3e8c77808c9a231d6cac4abb9b53da383eb726f3e06d047b51863ba" ], "sizeBytes": 1322120182 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28" ], "sizeBytes": 1260369106 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:fbbe87a501e72a1c9575515c024ba3dd6f449cb939149bb5e943c23ab7468776" ], "sizeBytes": 1260329003 }, { "names": [ "docker-registry.default.svc:5000/blu-behrens/aidabluworkflows@sha256:b514a9ca9ffb96ed37acf27b14630e1d64768e384831fa07d29dbc1af9b86fba" ], "sizeBytes": 1241092356 }, { "names": [ "docker-registry.default.svc:5000/sdi-openshift/aidabluworkflows@sha256:3f792fcc2a311eecdcd37411f275099d45d34982dd62988d2e27ac04538cf925" ], "sizeBytes": 1237242995 }, { "names": [ "docker-registry.default.svc:5000/aidblu-132/aidabluworkflows@sha256:b8e02d587456638baa28f36fee7cbed81b25f06bfa0047ff16f91ff60ff37436", "docker-registry.default.svc:5000/aidblu-132/aidabluworkflows:latest" ], "sizeBytes": 1237233603 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:04ae8faf03f4ceea2cc6148d8e4230da5f335dcfa614e3579c5097fb5477d179", "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows:latest" ], "sizeBytes": 1237080777 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:9e73364d2c4c1c18048f0b222c891c04093e6dd51767ea850ab11297f5a07435", "registry.spdev.net/aidablu/mistral:7.0.1" ], "sizeBytes": 1196633461 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", 
"registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/nodejs-test/sc-frontend-dev@sha256:09ea65db914efb89e3266e8551e167ec65e0a307da2f5c160f7f40cdb0a869d5" ], "sizeBytes": 997421106 }, { "names": [ "docker-registry.default.svc:5000/aida-1423/aida-portal@sha256:b91beec698791c84ff00d93929fcc7283ae28def8b5ad46e54282433ed0d3c8f" ], "sizeBytes": 873670787 }, { "names": [ "docker-registry.default.svc:5000/syi-test/ec-portal@sha256:ead4b74df73367479451bd0a511d99a2387a5f36412faaa2a3b7e3003e5d0adb" ], "sizeBytes": 848591512 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:c02b02a84b22d969289ed8d51cde437d54630f467f0df7ffcb196a55588737dd", "docker-registry.default.svc:5000/rapp-test/aida-blu:latest" ], "sizeBytes": 783548623 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:3a9b09eb0a146040c174791748d152a2730aea5f6a029acae389baf5c9a58f7f", "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu:develop" ], "sizeBytes": 773428925 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/blu-python@sha256:626e42db6f10c4174cb461eba337466b8391750447eef9c9eb56ae2f2a0ac0e7" ], "sizeBytes": 769122552 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu@sha256:3ba31a778b1d6c4b74ac5ae1ff8d03bd6da85e084e1f57f9584dcc4fbbd69e57" ], "sizeBytes": 706339178 }, { "names": [ "docker-registry.default.svc:5000/blu-behrens/aida-blu@sha256:caa13f43545746e477aa01628bd8a75c87ad1a07692b9801722bab08b5081839" ], "sizeBytes": 703296224 }, { "names": [ "docker-registry.default.svc:5000/blu-mhe/aida-blu@sha256:eb7d6b48480c942ba2ce37084ffb2a48eb705efdb970946e105d365623fae93a", "docker-registry.default.svc:5000/blu-mhe/aida-blu:develop" ], "sizeBytes": 702150311 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aida-blu@sha256:1bfa38e361910c7b8cdb954e78dd89325aab4bbf5112720b3a00834c86fbb287" ], "sizeBytes": 701148931 }, { "names": [ "docker-registry.default.svc:5000/sdi-openshift/aida-blu@sha256:79832f0c94ea13b81343be54196e63edc5228142059fda9e3c478d404ed233ca" ], "sizeBytes": 699992342 }, { "names": [ "docker.io/centos/python-36-centos7@sha256:091d56e3ab03d52ef0ffac4b88e7e1fa24ea0243bfd05297882c12ff8a0ba1df", "docker.io/centos/python-36-centos7:latest" ], "sizeBytes": 685991088 }, { "names": [ "docker-registry.default.svc:5000/test-rapp-blu/basepython@sha256:fc4a048d9b15bb9c9c5cd4b06f9e0ccd7fca5e219b66aa81b66f5e87057022ab" ], "sizeBytes": 683822752 }, { "names": [ "docker-registry.default.svc:5000/test-rapp-blu/basepython@sha256:251d31126caf45726613d031e68c33813511866f514a5dca534a30d4e50e7ad1" ], "sizeBytes": 683822752 }, { "names": [ "docker-registry.default.svc:5000/blu-behrens/python-test@sha256:aa6cc36ac260fa89329ba2b3c694bbba23838dcaf9eacb4c7fa49fd4bdce07f9", "docker-registry.default.svc:5000/blu-behrens/python-test:latest" ], "sizeBytes": 657678962 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_taggingclient@sha256:fdc08aca271e26a516ea1dadc3cb73a69374386fdd31e608bc478ae6b3698943", "docker-registry.default.svc:5000/pschoenthaler/autopython35_taggingclient@sha256:fdc08aca271e26a516ea1dadc3cb73a69374386fdd31e608bc478ae6b3698943" ], "sizeBytes": 630586883 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35@sha256:02df1a8c3ba08a86cdca7f1e9ebc4a8939e6d69d6eab4efbb586d7ac1c9820fd", 
"docker-registry.default.svc:5000/pschoenthaler-automation/autopython35@sha256:02df1a8c3ba08a86cdca7f1e9ebc4a8939e6d69d6eab4efbb586d7ac1c9820fd" ], "sizeBytes": 630586455 }, { "names": [ "docker-registry.default.svc:5000/automation-heine/autopython35_taggingclient@sha256:312de12001785159cc7f48016f021030c185290928d726d3792260531983e24b" ], "sizeBytes": 629245986 }, { "names": [ "docker-registry.default.svc:5000/automation-heine-blu/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86", "docker-registry.default.svc:5000/automation-heine/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86" ], "sizeBytes": 629245558 }, { "names": [ "docker-registry.default.svc:5000/openshift/python@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421", "registry.access.redhat.com/rhscl/python-35-rhel7@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421" ], "sizeBytes": 627139161 }, { "names": [ "docker-registry.default.svc:5000/servicecenter-nginx/servicecenter-backend-dev@sha256:99123df3916ba0f34a889975d0d18831792697e218a1bf394626a491d420c56d" ], "sizeBytes": 621852384 }, { "names": [ "docker-registry.default.svc:5000/openshift/nodejs@sha256:a9b89bb53fef405ea73f3eaff2dafa0c37c2cc988586b1a8a0e3bc19de07d4b8" ], "sizeBytes": 490733257 }, { "names": [ "docker-registry.default.svc:5000/openshift/mongodb@sha256:a98d7e38535780391e7dbf7ec80ce1b1f63e58d2da69cce9491138b080f5c8d0", "registry.access.redhat.com/rhscl/mongodb-34-rhel7@sha256:a98d7e38535780391e7dbf7ec80ce1b1f63e58d2da69cce9491138b080f5c8d0" ], "sizeBytes": 487134664 }, { "names": [ "registry.redhat.io/openshift3/ose-docker-builder@sha256:5880ac0c5326869a6f023d1607dee97d73d492e78a69a9750c27ebda93c6f004", "registry.redhat.io/openshift3/ose-docker-builder:v3.11.51" ], "sizeBytes": 429794753 }, { "names": [ "docker-registry.default.svc:5000/openshift/mysql@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed", "registry.access.redhat.com/rhscl/mysql-57-rhel7@sha256:803edcbd9fda30de37a23201a3c43b7994628f946fff4c055c1c43cd22a9a4ed" ], "sizeBytes": 419508201 }, { "names": [ "registry.redhat.io/openshift3/ose-deployer@sha256:c16be3658755a19ba6ebb7af0b2890ba264106ea9013eb0c2c3c71c8856959bb", "registry.redhat.io/openshift3/ose-deployer:v3.11.51" ], "sizeBytes": 361171027 }, { "names": [ "docker.io/centos/postgresql-96-centos7@sha256:d87befdd35b7c2be67e1a5060395a06386e7905353cae7d0e37e6c3119010e61" ], "sizeBytes": 335495304 }, { "names": [ "docker.io/openshift/oauth-proxy@sha256:4b73830ee6f7447d0921eedc3946de50016eb8f048d66ea3969abc4116f1e42a", "docker.io/openshift/oauth-proxy:v1.0.0" ], "sizeBytes": 228241928 }, { "names": [ "registry.access.redhat.com/openshift3/prometheus-node-exporter@sha256:290ef0210f7cca5859c6224a81d36fdcb2e5dd644e9a3dc96f2fbaaba6b79935", "registry.access.redhat.com/openshift3/prometheus-node-exporter:v3.10.34" ], "sizeBytes": 222046071 }, { "names": [ "registry.access.redhat.com/openshift3/ose-pod@sha256:6c716eba6a032b5c75690407ef3be8e598047b3b37f3745b71eb67c1a64ee6e0", "registry.access.redhat.com/openshift3/ose-pod:v3.10.34" ], "sizeBytes": 214236553 } ], "nodeInfo": { "architecture": "amd64", "bootID": "8f04876c-fe3c-44ee-8173-81b6068eaab7", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": 
"422AF748-24CB-5955-94B8-40EC6727214E" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:51:31 +0100 (0:00:11.391) 0:12:05.466 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node06.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:52:31 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "124727", "LimitSIGPENDING": "63382", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "4248479071627", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "4248479071710", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:52:31 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:31 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:52:31 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "shutdown.target multi-user.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "4248478439305", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:52:31 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": 
"yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "4248478437104", "MainPID": "124727", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:31 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "97112064", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:52:31 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:31 CET", "StandardInput": "null", "AssertTimestampMonotonic": "4248478437105", "DefaultDependencies": "yes", "Requires": "basic.target var.mount -.mount", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "4248478439221", "AllowIsolate": "no", "Wants": "docker.service dnsmasq.service system.slice", "After": "docker.service chronyd.service basic.target dnsmasq.service -.mount ntpd.service system.slice var.mount systemd-journald.socket", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "4248478423113", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "4248478407905", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node06.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:31 CET", 
"ActiveEnterTimestampMonotonic": "4248479071710", "ActiveExitTimestamp": "Wed 2019-01-09 14:52:31 CET", "ActiveExitTimestampMonotonic": "4248478407905", "ActiveState": "active", "After": "docker.service chronyd.service basic.target dnsmasq.service -.mount ntpd.service system.slice var.mount systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:52:31 CET", "AssertTimestampMonotonic": "4248478437105", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:52:31 CET", "ConditionTimestampMonotonic": "4248478437104", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "124727", "ExecMainStartTimestamp": "Wed 2019-01-09 14:52:31 CET", "ExecMainStartTimestampMonotonic": "4248478439221", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:31 CET", "InactiveEnterTimestampMonotonic": "4248478423113", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:31 CET", "InactiveExitTimestampMonotonic": "4248478439305", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "124727", "MemoryAccounting": "yes", "MemoryCurrent": "97112064", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target var.mount -.mount", "RequiresMountsFor": 
"/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker.service dnsmasq.service system.slice", "WatchdogTimestamp": "Wed 2019-01-09 14:52:31 CET", "WatchdogTimestampMonotonic": "4248479071627", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:51:33 +0100 (0:00:01.806) 0:12:07.273 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node06.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node06.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.89"}, {"type": "Hostname", "address": "sp-os-node06.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "2c0507a8-f930-4180-a967-757095b76082", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AF2F5-6B74-B0B2-5D2E-B8FBEBB504B0", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", 
"hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-21T09:45:01Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-21T09:45:01Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-07T10:17:31Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:32Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node06.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node06.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node06.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871319", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node06.os.ad.scanplus.de", "uid": "9a490870-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node06.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node06.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node06.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node06.os.ad.scanplus.de", "resourceVersion": "93871319", "selfLink": "/api/v1/nodes/sp-os-node06.os.ad.scanplus.de", "uid": "9a490870-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node06.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.89", "type": "InternalIP" }, { "address": "sp-os-node06.os.ad.scanplus.de", "type": "Hostname" } ], 
"allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2018-11-21T09:45:01Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2018-11-21T09:45:01Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2019-01-07T10:17:31Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2019-01-09T14:51:32Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "2c0507a8-f930-4180-a967-757095b76082", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AF2F5-6B74-B0B2-5D2E-B8FBEBB504B0" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node06.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node06.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.89"}, {"type": "Hostname", "address": "sp-os-node06.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "2c0507a8-f930-4180-a967-757095b76082", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AF2F5-6B74-B0B2-5D2E-B8FBEBB504B0", 
"operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-21T09:45:01Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-21T09:45:01Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-07T10:17:31Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:32Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:32Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node06.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node06.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node06.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871319", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node06.os.ad.scanplus.de", "uid": "9a490870-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node06.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node06.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node06.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node06.os.ad.scanplus.de", "resourceVersion": "93871319", "selfLink": "/api/v1/nodes/sp-os-node06.os.ad.scanplus.de", "uid": "9a490870-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node06.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.89", "type": "InternalIP" }, { "address": 
"sp-os-node06.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2018-11-21T09:45:01Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2018-11-21T09:45:01Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2019-01-07T10:17:31Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2019-01-09T14:51:32Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:32Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "2c0507a8-f930-4180-a967-757095b76082", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AF2F5-6B74-B0B2-5D2E-B8FBEBB504B0" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node06.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node06.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.89"}, {"type": "Hostname", "address": "sp-os-node06.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "2c0507a8-f930-4180-a967-757095b76082", "osImage": "Unknown", "architecture": "amd64", 
"systemUUID": "422AF2F5-6B74-B0B2-5D2E-B8FBEBB504B0", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1719654738, "names": ["docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f"]}, {"sizeBytes": 1371524286, "names": ["docker-registry.default.svc:5000/mhe-blu/aidabluworkflows@sha256:6c3e03bc32b64dc9ec6f663d1c61c8578672b45fa99dcf2904670531d5a366ad", "docker-registry.default.svc:5000/mhe-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1367428353, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8a9b2f3497798080d790276a063440d619e44efca05ff6472159b20f5481be49"]}, {"sizeBytes": 1367415511, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:02e38f5cf4a468064c9f6b4028f5a7abb7ad3d06f53a03f165fb07363631a3b9"]}, {"sizeBytes": 1367412621, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:2d70814b7aa27f2e3997383cb900ae99c33c8fd58cc22fe6039ddc9c33e3a74b"]}, {"sizeBytes": 1367412528, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:3d8be53a8e587ecdc87c7ee6c17ce2cc929324c5ed563d9ad37e36209cc0d2c7"]}, {"sizeBytes": 1367412469, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:872fffe48f91dde2070ce3703af7db231ed7ac163a2e3393a8be0b443221f137"]}, {"sizeBytes": 1365912867, "names": ["registry.spdev.net/aidablu/aidabluworkflows@sha256:756674d57bbfd78d7e0aad89c2861adcba4f9356dcefdaafaa06436e3b19caf3"]}, {"sizeBytes": 1365050350, "names": ["docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:b99a204669cbe8078f515d004a5fbb06272638f4ae8cc38dbd4691efe279ddd8"]}, {"sizeBytes": 1322086654, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:05b9fc7f53e8b31a0c39553f0ed9e925ee0d8fafb41570f73d40df8cd8f20982"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1260369106, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28"]}, {"sizeBytes": 1260352849, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:e15e44608b0fdb21fac7d1007506614b51b945c5ec5c3b9e576e744f8ccbe4aa"]}, {"sizeBytes": 1260333022, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:94415a49c75692b43a572a028b51cc1f2e308bd61c36bf307d8d74ba6161831a"]}, {"sizeBytes": 1241229720, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:3b1b39f4087f471cd58047e6dfc32bcde465c14b8dd5418d80381e428a000083"]}, {"sizeBytes": 1241092356, "names": ["docker-registry.default.svc:5000/blu-behrens/aidabluworkflows@sha256:b514a9ca9ffb96ed37acf27b14630e1d64768e384831fa07d29dbc1af9b86fba", "docker-registry.default.svc:5000/blu-behrens/aidabluworkflows:latest"]}, {"sizeBytes": 1237268824, "names": ["docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:c3e853c401b780c90cdf2e6edb18873cc51334d36097e930be1682abff0749c2", 
"docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1237242995, "names": ["docker-registry.default.svc:5000/sdi-openshift/aidabluworkflows@sha256:3f792fcc2a311eecdcd37411f275099d45d34982dd62988d2e27ac04538cf925"]}, {"sizeBytes": 1237236937, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:f903c8e05893254eee5fd1ab7f3535e8aa0238712a27ee35fc32d66370c1973e"]}, {"sizeBytes": 1237236834, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:7ab1f3df42d1e359bcde3e3e9978b10ff8d3f8c2facdfb3efa037d543ddf4a72"]}, {"sizeBytes": 1237234244, "names": ["docker-registry.default.svc:5000/aida-1423/aidabluworkflows@sha256:8d45507098026fc3be703b0b0d0def25df58a2a5c86f2b82a2639c99bdd76582"]}, {"sizeBytes": 1237234015, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:fc1c714c181819fc40df7cfea62ed1656ce2dbd150f58aba4ac7f1fb2157e684"]}, {"sizeBytes": 1237233908, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:2cff5c19a3f6f8c08e22ff0cd6e25b1955e22b4d4f97e8195756e02127a3078d", "docker-registry.default.svc:5000/rapp-test/aidabluworkflows:latest"]}, {"sizeBytes": 1237233603, "names": ["docker-registry.default.svc:5000/aidblu-132/aidabluworkflows@sha256:b8e02d587456638baa28f36fee7cbed81b25f06bfa0047ff16f91ff60ff37436"]}, {"sizeBytes": 1237080777, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:04ae8faf03f4ceea2cc6148d8e4230da5f335dcfa614e3579c5097fb5477d179"]}, {"sizeBytes": 1237080616, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:6619697410b73151f131ecf9ffb8ce45626ddf5690739409cb2923e742688e44"]}, {"sizeBytes": 1237080436, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:3a845810bc2213515fa4af3ac8547846fdbdc75f3e8617e67a890ccdbf21ae26", "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1237080414, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:4ac591ec5be9c81a7d94cd80474e5fd2b5d36d587f0db669317a2c38afcb41f3", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1196480016, "names": ["registry.spdev.net/aidablu/mistral@sha256:0e82049a566de0fa322e7298a12dca8e2afc60f42c57ce1749c6d0b1418046c4", "registry.spdev.net/aidablu/mistral:latest"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 929232262, "names": ["docker-registry.default.svc:5000/automation-gleim/aciapi@sha256:f530f18858b19e9cb180074b764c1432a76ca6a9f62a8c22e1df2c28366c0cff"]}, {"sizeBytes": 929185186, "names": ["docker-registry.default.svc:5000/automation-gleim/ftpclient@sha256:3979b2303382664016fc1997bd5a77ba1c82520a889c296d9c993a0f1762e190"]}, {"sizeBytes": 861787558, "names": ["docker-registry.default.svc:5000/automation-rapp-blu/aida-blu@sha256:74c046b4542535199998a47a99b403ca3d8ecff7c0a2775a50abc7579294ea6c"]}, {"sizeBytes": 860633271, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu@sha256:b03802a0b0e1d0e9e5ecdfc2a4e5640af3f59baf728ef8e081c52f96cc52ef31"]}, 
{"sizeBytes": 856218553, "names": ["docker-registry.default.svc:5000/aida-1423/aida-blu@sha256:4a23270abc7f91778764a12fd8d2684b8a8c50c1c3fae20b8658734db60566cc"]}, {"sizeBytes": 823386570, "names": ["registry.access.redhat.com/openshift3/ose-docker-builder@sha256:3e38d2ed0ef59ebec8d3d4a03b21b731c0f5fc52f76b4d7e6975b435a1d5d6bc", "registry.access.redhat.com/openshift3/ose-docker-builder:v3.10.34"]}, {"sizeBytes": 822723276, "names": ["docker-registry.default.svc:5000/automation-schoenthaler-aida/aida-portal@sha256:0f2b638cc03d6519c9aa9c284575b0e89d1302136ce4f896584793245a3d11f4"]}, {"sizeBytes": 822686028, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:d0f2f3c44139144f99934560ac8f7de1ba538b3c5c17803fa7af0106cb042f0e"]}, {"sizeBytes": 822684969, "names": ["docker-registry.default.svc:5000/aida-portal-dev/aida-portal@sha256:e6f56fa0ae1ae6dc6c07ec7beb94daf3dc8101e3bf146d5b5c455eb2fbdc4d70"]}, {"sizeBytes": 822681134, "names": ["docker-registry.default.svc:5000/automation-aida-dev/aida-portal@sha256:33c5ff9a8f850d1c3ba70132ff0f177ec929e4008c75814f837ad21d57ad5753", "docker-registry.default.svc:5000/automation-aida-dev/aida-portal:latest"]}, {"sizeBytes": 792421929, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:5e8132e59fcd9b043c106ded29f9b0986dc3bef120931098ee59be5cd21e4679", "registry.access.redhat.com/openshift3/ose-deployer:v3.10"]}, {"sizeBytes": 792419982, "names": ["registry.access.redhat.com/openshift3/ose-control-plane@sha256:8ca530bc30bc31ce334cb1b42a0f84f9b2f94f7980661055ceeb4350853edc0c", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10"]}, {"sizeBytes": 788614541, "names": ["registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34"]}, {"sizeBytes": 781207561, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu@sha256:2dcff892c277df733afa22299eeff7e56514a728800ebb43be9cd60511d1e242", "docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu:develop"]}, {"sizeBytes": 773428925, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:3a9b09eb0a146040c174791748d152a2730aea5f6a029acae389baf5c9a58f7f"]}, {"sizeBytes": 771685032, "names": ["docker-registry.default.svc:5000/automation-rick-test/autopython35_networkapi@sha256:c28188e457335981c86e97c620e42c28ca931e6a65607f83321ba6db4d922ed3"]}, {"sizeBytes": 771684975, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:0d2d43076effb56513ef8e2b80dcf5a35506cec158f2fce29c0b3d6bf4bb3690", "docker-registry.default.svc:5000/pschoenthaler-automation/autopython35_networkapi@sha256:0d2d43076effb56513ef8e2b80dcf5a35506cec158f2fce29c0b3d6bf4bb3690", "docker-registry.default.svc:5000/pschoenthaler/autopython35_networkapi@sha256:0d2d43076effb56513ef8e2b80dcf5a35506cec158f2fce29c0b3d6bf4bb3690"]}, {"sizeBytes": 770553381, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:a9ebf974d265681eb1dfe0f21595e44cf3ee09cb3808989a6fefed41d9b7d624", "docker-registry.default.svc:5000/pschoenthaler/autopython35_sshclient@sha256:a9ebf974d265681eb1dfe0f21595e44cf3ee09cb3808989a6fefed41d9b7d624"]}, {"sizeBytes": 769122552, "names": ["docker-registry.default.svc:5000/rapp-test/blu-python@sha256:4da424c0951f33c054985d2cc6d0ab66558c010749e9d10d8721097a30879185"]}], "conditions": [{"status": "False", "lastTransitionTime": 
"2018-11-21T09:45:01Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:42Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-21T09:45:01Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:42Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-07T10:17:31Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:42Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:51:42Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:51:42Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:42Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node06.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node06.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node06.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871361", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node06.os.ad.scanplus.de", "uid": "9a490870-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') ok: [sp-os-node06.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node06.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node06.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node06.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node06.os.ad.scanplus.de", "resourceVersion": "93871361", "selfLink": "/api/v1/nodes/sp-os-node06.os.ad.scanplus.de", "uid": "9a490870-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node06.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.89", "type": "InternalIP" }, { "address": "sp-os-node06.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": 
"16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:42Z", "lastTransitionTime": "2018-11-21T09:45:01Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:42Z", "lastTransitionTime": "2018-11-21T09:45:01Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:42Z", "lastTransitionTime": "2019-01-07T10:17:31Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:42Z", "lastTransitionTime": "2019-01-09T14:51:42Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:42Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f" ], "sizeBytes": 1719654738 }, { "names": [ "docker-registry.default.svc:5000/mhe-blu/aidabluworkflows@sha256:6c3e03bc32b64dc9ec6f663d1c61c8578672b45fa99dcf2904670531d5a366ad", "docker-registry.default.svc:5000/mhe-blu/aidabluworkflows:latest" ], "sizeBytes": 1371524286 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8a9b2f3497798080d790276a063440d619e44efca05ff6472159b20f5481be49" ], "sizeBytes": 1367428353 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:02e38f5cf4a468064c9f6b4028f5a7abb7ad3d06f53a03f165fb07363631a3b9" ], "sizeBytes": 1367415511 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:2d70814b7aa27f2e3997383cb900ae99c33c8fd58cc22fe6039ddc9c33e3a74b" ], "sizeBytes": 1367412621 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:3d8be53a8e587ecdc87c7ee6c17ce2cc929324c5ed563d9ad37e36209cc0d2c7" ], "sizeBytes": 1367412528 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:872fffe48f91dde2070ce3703af7db231ed7ac163a2e3393a8be0b443221f137" ], "sizeBytes": 1367412469 }, { "names": [ "registry.spdev.net/aidablu/aidabluworkflows@sha256:756674d57bbfd78d7e0aad89c2861adcba4f9356dcefdaafaa06436e3b19caf3" ], "sizeBytes": 1365912867 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:b99a204669cbe8078f515d004a5fbb06272638f4ae8cc38dbd4691efe279ddd8" ], "sizeBytes": 1365050350 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:05b9fc7f53e8b31a0c39553f0ed9e925ee0d8fafb41570f73d40df8cd8f20982" ], "sizeBytes": 1322086654 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28" ], "sizeBytes": 
1260369106 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:e15e44608b0fdb21fac7d1007506614b51b945c5ec5c3b9e576e744f8ccbe4aa" ], "sizeBytes": 1260352849 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:94415a49c75692b43a572a028b51cc1f2e308bd61c36bf307d8d74ba6161831a" ], "sizeBytes": 1260333022 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:3b1b39f4087f471cd58047e6dfc32bcde465c14b8dd5418d80381e428a000083" ], "sizeBytes": 1241229720 }, { "names": [ "docker-registry.default.svc:5000/blu-behrens/aidabluworkflows@sha256:b514a9ca9ffb96ed37acf27b14630e1d64768e384831fa07d29dbc1af9b86fba", "docker-registry.default.svc:5000/blu-behrens/aidabluworkflows:latest" ], "sizeBytes": 1241092356 }, { "names": [ "docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:c3e853c401b780c90cdf2e6edb18873cc51334d36097e930be1682abff0749c2", "docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows:latest" ], "sizeBytes": 1237268824 }, { "names": [ "docker-registry.default.svc:5000/sdi-openshift/aidabluworkflows@sha256:3f792fcc2a311eecdcd37411f275099d45d34982dd62988d2e27ac04538cf925" ], "sizeBytes": 1237242995 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:f903c8e05893254eee5fd1ab7f3535e8aa0238712a27ee35fc32d66370c1973e" ], "sizeBytes": 1237236937 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:7ab1f3df42d1e359bcde3e3e9978b10ff8d3f8c2facdfb3efa037d543ddf4a72" ], "sizeBytes": 1237236834 }, { "names": [ "docker-registry.default.svc:5000/aida-1423/aidabluworkflows@sha256:8d45507098026fc3be703b0b0d0def25df58a2a5c86f2b82a2639c99bdd76582" ], "sizeBytes": 1237234244 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:fc1c714c181819fc40df7cfea62ed1656ce2dbd150f58aba4ac7f1fb2157e684" ], "sizeBytes": 1237234015 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:2cff5c19a3f6f8c08e22ff0cd6e25b1955e22b4d4f97e8195756e02127a3078d", "docker-registry.default.svc:5000/rapp-test/aidabluworkflows:latest" ], "sizeBytes": 1237233908 }, { "names": [ "docker-registry.default.svc:5000/aidblu-132/aidabluworkflows@sha256:b8e02d587456638baa28f36fee7cbed81b25f06bfa0047ff16f91ff60ff37436" ], "sizeBytes": 1237233603 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:04ae8faf03f4ceea2cc6148d8e4230da5f335dcfa614e3579c5097fb5477d179" ], "sizeBytes": 1237080777 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:6619697410b73151f131ecf9ffb8ce45626ddf5690739409cb2923e742688e44" ], "sizeBytes": 1237080616 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:3a845810bc2213515fa4af3ac8547846fdbdc75f3e8617e67a890ccdbf21ae26", "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows:latest" ], "sizeBytes": 1237080436 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:4ac591ec5be9c81a7d94cd80474e5fd2b5d36d587f0db669317a2c38afcb41f3", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest" ], "sizeBytes": 1237080414 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ 
"registry.spdev.net/aidablu/mistral@sha256:0e82049a566de0fa322e7298a12dca8e2afc60f42c57ce1749c6d0b1418046c4", "registry.spdev.net/aidablu/mistral:latest" ], "sizeBytes": 1196480016 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/aciapi@sha256:f530f18858b19e9cb180074b764c1432a76ca6a9f62a8c22e1df2c28366c0cff" ], "sizeBytes": 929232262 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/ftpclient@sha256:3979b2303382664016fc1997bd5a77ba1c82520a889c296d9c993a0f1762e190" ], "sizeBytes": 929185186 }, { "names": [ "docker-registry.default.svc:5000/automation-rapp-blu/aida-blu@sha256:74c046b4542535199998a47a99b403ca3d8ecff7c0a2775a50abc7579294ea6c" ], "sizeBytes": 861787558 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu@sha256:b03802a0b0e1d0e9e5ecdfc2a4e5640af3f59baf728ef8e081c52f96cc52ef31" ], "sizeBytes": 860633271 }, { "names": [ "docker-registry.default.svc:5000/aida-1423/aida-blu@sha256:4a23270abc7f91778764a12fd8d2684b8a8c50c1c3fae20b8658734db60566cc" ], "sizeBytes": 856218553 }, { "names": [ "registry.access.redhat.com/openshift3/ose-docker-builder@sha256:3e38d2ed0ef59ebec8d3d4a03b21b731c0f5fc52f76b4d7e6975b435a1d5d6bc", "registry.access.redhat.com/openshift3/ose-docker-builder:v3.10.34" ], "sizeBytes": 823386570 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler-aida/aida-portal@sha256:0f2b638cc03d6519c9aa9c284575b0e89d1302136ce4f896584793245a3d11f4" ], "sizeBytes": 822723276 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:d0f2f3c44139144f99934560ac8f7de1ba538b3c5c17803fa7af0106cb042f0e" ], "sizeBytes": 822686028 }, { "names": [ "docker-registry.default.svc:5000/aida-portal-dev/aida-portal@sha256:e6f56fa0ae1ae6dc6c07ec7beb94daf3dc8101e3bf146d5b5c455eb2fbdc4d70" ], "sizeBytes": 822684969 }, { "names": [ "docker-registry.default.svc:5000/automation-aida-dev/aida-portal@sha256:33c5ff9a8f850d1c3ba70132ff0f177ec929e4008c75814f837ad21d57ad5753", "docker-registry.default.svc:5000/automation-aida-dev/aida-portal:latest" ], "sizeBytes": 822681134 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:5e8132e59fcd9b043c106ded29f9b0986dc3bef120931098ee59be5cd21e4679", "registry.access.redhat.com/openshift3/ose-deployer:v3.10" ], "sizeBytes": 792421929 }, { "names": [ "registry.access.redhat.com/openshift3/ose-control-plane@sha256:8ca530bc30bc31ce334cb1b42a0f84f9b2f94f7980661055ceeb4350853edc0c", "registry.access.redhat.com/openshift3/ose-control-plane:v3.10" ], "sizeBytes": 792419982 }, { "names": [ "registry.access.redhat.com/openshift3/ose-deployer@sha256:a183db6f8ff4db292d6e0650cbb8ce19e9976e6076d345e77d593badc26905c4", "registry.access.redhat.com/openshift3/ose-deployer:v3.10.34" ], "sizeBytes": 788614541 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu@sha256:2dcff892c277df733afa22299eeff7e56514a728800ebb43be9cd60511d1e242", "docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu:develop" ], "sizeBytes": 781207561 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:3a9b09eb0a146040c174791748d152a2730aea5f6a029acae389baf5c9a58f7f" ], "sizeBytes": 773428925 }, { "names": [ 
"docker-registry.default.svc:5000/automation-rick-test/autopython35_networkapi@sha256:c28188e457335981c86e97c620e42c28ca931e6a65607f83321ba6db4d922ed3" ], "sizeBytes": 771685032 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:0d2d43076effb56513ef8e2b80dcf5a35506cec158f2fce29c0b3d6bf4bb3690", "docker-registry.default.svc:5000/pschoenthaler-automation/autopython35_networkapi@sha256:0d2d43076effb56513ef8e2b80dcf5a35506cec158f2fce29c0b3d6bf4bb3690", "docker-registry.default.svc:5000/pschoenthaler/autopython35_networkapi@sha256:0d2d43076effb56513ef8e2b80dcf5a35506cec158f2fce29c0b3d6bf4bb3690" ], "sizeBytes": 771684975 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:a9ebf974d265681eb1dfe0f21595e44cf3ee09cb3808989a6fefed41d9b7d624", "docker-registry.default.svc:5000/pschoenthaler/autopython35_sshclient@sha256:a9ebf974d265681eb1dfe0f21595e44cf3ee09cb3808989a6fefed41d9b7d624" ], "sizeBytes": 770553381 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/blu-python@sha256:4da424c0951f33c054985d2cc6d0ab66558c010749e9d10d8721097a30879185" ], "sizeBytes": 769122552 } ], "nodeInfo": { "architecture": "amd64", "bootID": "2c0507a8-f930-4180-a967-757095b76082", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AF2F5-6B74-B0B2-5D2E-B8FBEBB504B0" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:51:44 +0100 (0:00:11.527) 0:12:18.800 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node07.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:52:44 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "24968", "LimitSIGPENDING": "63382", 
"FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "10162230837514", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "10162230837607", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:52:44 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:44 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:52:45 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "multi-user.target shutdown.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "10162229797466", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:52:44 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "10162229795370", "MainPID": "24968", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:45 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "107544576", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:52:44 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:44 CET", "StandardInput": "null", "AssertTimestampMonotonic": "10162229795370", "DefaultDependencies": "yes", "Requires": "var.mount basic.target -.mount", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": 
"18446744073709551615", "ExecMainStartTimestampMonotonic": "10162229797396", "AllowIsolate": "no", "Wants": "docker.service dnsmasq.service system.slice", "After": "basic.target ntpd.service var.mount -.mount systemd-journald.socket system.slice dnsmasq.service docker.service chronyd.service", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "10162229785301", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "10162229766498", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node07.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:45 CET", "ActiveEnterTimestampMonotonic": "10162230837607", "ActiveExitTimestamp": "Wed 2019-01-09 14:52:44 CET", "ActiveExitTimestampMonotonic": "10162229766498", "ActiveState": "active", "After": "basic.target ntpd.service var.mount -.mount systemd-journald.socket system.slice dnsmasq.service docker.service chronyd.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:52:44 CET", "AssertTimestampMonotonic": "10162229795370", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:52:44 CET", "ConditionTimestampMonotonic": "10162229795370", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "24968", "ExecMainStartTimestamp": "Wed 2019-01-09 14:52:44 CET", "ExecMainStartTimestampMonotonic": "10162229797396", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": 
"0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:44 CET", "InactiveEnterTimestampMonotonic": "10162229785301", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:44 CET", "InactiveExitTimestampMonotonic": "10162229797466", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "24968", "MemoryAccounting": "yes", "MemoryCurrent": "107544576", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "var.mount basic.target -.mount", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker.service dnsmasq.service system.slice", "WatchdogTimestamp": "Wed 2019-01-09 14:52:45 CET", "WatchdogTimestampMonotonic": "10162230837514", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:51:46 +0100 (0:00:02.019) 0:12:20.820 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC 
ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node07.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node07.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249840Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.90"}, {"type": "Hostname", "address": "sp-os-node07.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "c8322728-b7df-42b4-8a87-aee1d595710d", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AA3AF-4194-66BE-734A-E269F11EDB55", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147440Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-14T12:45:19Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-14T12:45:19Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-12-20T09:47:34Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:46Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:35:14Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node07.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node07.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node07.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871382", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": 
"/api/v1/nodes/sp-os-node07.os.ad.scanplus.de", "uid": "9a63d9ad-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node07.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node07.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node07.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node07.os.ad.scanplus.de", "resourceVersion": "93871382", "selfLink": "/api/v1/nodes/sp-os-node07.os.ad.scanplus.de", "uid": "9a63d9ad-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node07.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.90", "type": "InternalIP" }, { "address": "sp-os-node07.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147440Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249840Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-11-14T12:45:19Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-11-14T12:45:19Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-12-20T09:47:34Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2019-01-09T14:51:46Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-09-13T21:35:14Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "c8322728-b7df-42b4-8a87-aee1d595710d", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AA3AF-4194-66BE-734A-E269F11EDB55" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file 
/usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node07.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node07.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249840Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.90"}, {"type": "Hostname", "address": "sp-os-node07.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "c8322728-b7df-42b4-8a87-aee1d595710d", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AA3AF-4194-66BE-734A-E269F11EDB55", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147440Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-14T12:45:19Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-14T12:45:19Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-12-20T09:47:34Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:51:46Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:35:14Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:46Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node07.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node07.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node07.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871382", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": 
"true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node07.os.ad.scanplus.de", "uid": "9a63d9ad-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node07.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node07.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node07.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node07.os.ad.scanplus.de", "resourceVersion": "93871382", "selfLink": "/api/v1/nodes/sp-os-node07.os.ad.scanplus.de", "uid": "9a63d9ad-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node07.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.90", "type": "InternalIP" }, { "address": "sp-os-node07.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147440Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249840Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-11-14T12:45:19Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-11-14T12:45:19Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-12-20T09:47:34Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2019-01-09T14:51:46Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:46Z", "lastTransitionTime": "2018-09-13T21:35:14Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "c8322728-b7df-42b4-8a87-aee1d595710d", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AA3AF-4194-66BE-734A-E269F11EDB55" } } } ], "returncode": 0 }, "retries": 
37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node07.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node07.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249840Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.90"}, {"type": "Hostname", "address": "sp-os-node07.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "c8322728-b7df-42b4-8a87-aee1d595710d", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422AA3AF-4194-66BE-734A-E269F11EDB55", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147440Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 2112658666, "names": ["docker-registry.default.svc:5000/timetest/timecontrol@sha256:7810e74bfa0d0813425a90e8656582c8f030616eb744c586085b538bce8dec4a"]}, {"sizeBytes": 1719657821, "names": ["docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:4e36ea72daef21cf78abce6fbec5f002d505eddb6ce16b06ca24f20b41f4232b"]}, {"sizeBytes": 1719654738, "names": ["docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f", "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1371524286, "names": ["docker-registry.default.svc:5000/mhe-blu/aidabluworkflows@sha256:6c3e03bc32b64dc9ec6f663d1c61c8578672b45fa99dcf2904670531d5a366ad"]}, {"sizeBytes": 1367428353, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8a9b2f3497798080d790276a063440d619e44efca05ff6472159b20f5481be49"]}, {"sizeBytes": 1367428248, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:e1742428482eca5b0b0e43ca589d812b35a1920bb9f09c01cf953b0f66ef5a9a"]}, {"sizeBytes": 1367415809, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:d58fcd36ac381f430007485d4d1d2234e8846c611d3abbfdcda86233721b17bd"]}, {"sizeBytes": 1367415760, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:daa99d134becfa9b9b1768783ad45c6bca845b29d01ab8a9978bedb17bc5b59e"]}, {"sizeBytes": 1367415511, "names": 
["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:02e38f5cf4a468064c9f6b4028f5a7abb7ad3d06f53a03f165fb07363631a3b9"]}, {"sizeBytes": 1367412932, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8cc8d6a6ccbb6720f1597bc455c67161ce589c2ffeb07664b1f744454777c411"]}, {"sizeBytes": 1367412869, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:85f0dff9fee0427344f992db9105fb41b08b7eb440270d35196f43652bb03f04"]}, {"sizeBytes": 1367412621, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:2d70814b7aa27f2e3997383cb900ae99c33c8fd58cc22fe6039ddc9c33e3a74b"]}, {"sizeBytes": 1367412535, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:840e0426bcade49105d1cfe6ebca240d8fabfc3cbbb9b24f81bd2a0fc8f5ed9b", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1367412528, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:3d8be53a8e587ecdc87c7ee6c17ce2cc929324c5ed563d9ad37e36209cc0d2c7"]}, {"sizeBytes": 1367412469, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:872fffe48f91dde2070ce3703af7db231ed7ac163a2e3393a8be0b443221f137"]}, {"sizeBytes": 1367386571, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:c164f945eeef8c540c9f05866d336fda6951df54ddf387f7da5202d87c6c2110"]}, {"sizeBytes": 1365058051, "names": ["docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:918e8988a972053c4ae030bd2202f5a7414d3e2a547e87d2823d93a42a84dcb0"]}, {"sizeBytes": 1365050753, "names": ["docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:c866f2e6a55e8c2d0508a2b873743fed23eec2b48a63941ac968b5e1f62eca31"]}, {"sizeBytes": 1355421688, "names": ["registry.spdev.net/aidablu/aidabluworkflows@sha256:6e904bba02adb0a244f05349d941f31d175d670fd619f15f1fe83b8449689c2e"]}, {"sizeBytes": 1322155335, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:c9338306320131e753758787028bdee3f8666f4883c8f08acf26d5efd6224a88"]}, {"sizeBytes": 1322120182, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:4b4eb2d1b3e8c77808c9a231d6cac4abb9b53da383eb726f3e06d047b51863ba"]}, {"sizeBytes": 1322086654, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:05b9fc7f53e8b31a0c39553f0ed9e925ee0d8fafb41570f73d40df8cd8f20982"]}, {"sizeBytes": 1322085559, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:c255a8a5027f542ebf2a8233b477ad2a791849b964663aebc5401e044af394b6"]}, {"sizeBytes": 1321996874, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:7fa5bf8894d24dcd456881c13a4308a0c7ac4c453892dfeb777b819452667f41"]}, {"sizeBytes": 1321891170, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:da8202dd027662964e4c97d42288f201d779f3bbae8ac50d6684750e072a27ee"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1260401062, "names": 
["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:a452127451507663962c797e27bdf29c35c7ab0a8bc64f0846e02823519758e5"]}, {"sizeBytes": 1260369106, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28"]}, {"sizeBytes": 1260352849, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:e15e44608b0fdb21fac7d1007506614b51b945c5ec5c3b9e576e744f8ccbe4aa"]}, {"sizeBytes": 1260329003, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:fbbe87a501e72a1c9575515c024ba3dd6f449cb939149bb5e943c23ab7468776"]}, {"sizeBytes": 1246007543, "names": ["docker-registry.default.svc:5000/automation-zimmermann-blu/aidabluworkflows@sha256:47783cce85c7196fbfc69bc6cd2b119862616e6ba0e7d1742bba422a1d43c328", "docker-registry.default.svc:5000/automation-zimmermann-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1241229720, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:3b1b39f4087f471cd58047e6dfc32bcde465c14b8dd5418d80381e428a000083", "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows:v1"]}, {"sizeBytes": 1241092356, "names": ["docker-registry.default.svc:5000/blu-behrens/aidabluworkflows@sha256:b514a9ca9ffb96ed37acf27b14630e1d64768e384831fa07d29dbc1af9b86fba"]}, {"sizeBytes": 1237271903, "names": ["docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:4cb02a2015dd26a1c20e9a4b94ab271368a5fe11611e79c0ded27771d67f9228"]}, {"sizeBytes": 1237236937, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:f903c8e05893254eee5fd1ab7f3535e8aa0238712a27ee35fc32d66370c1973e"]}, {"sizeBytes": 1237236834, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:7ab1f3df42d1e359bcde3e3e9978b10ff8d3f8c2facdfb3efa037d543ddf4a72"]}, {"sizeBytes": 1237236515, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:8e8377f266b4e3e21995667f6c6d6c63db101ed9a935b06ca813c53a8619355b"]}, {"sizeBytes": 1237234244, "names": ["docker-registry.default.svc:5000/aida-1423/aidabluworkflows@sha256:8d45507098026fc3be703b0b0d0def25df58a2a5c86f2b82a2639c99bdd76582"]}, {"sizeBytes": 1237234015, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:fc1c714c181819fc40df7cfea62ed1656ce2dbd150f58aba4ac7f1fb2157e684"]}, {"sizeBytes": 1237234005, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:5f59305cf8edfbd5fa70337d60d8f246714507ae209be9131bcc6fa54f49ec7c", "docker-registry.default.svc:5000/rapp-test/aidabluworkflows:latest"]}, {"sizeBytes": 1237233908, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:2cff5c19a3f6f8c08e22ff0cd6e25b1955e22b4d4f97e8195756e02127a3078d"]}, {"sizeBytes": 1237233603, "names": ["docker-registry.default.svc:5000/aidblu-132/aidabluworkflows@sha256:b8e02d587456638baa28f36fee7cbed81b25f06bfa0047ff16f91ff60ff37436"]}, {"sizeBytes": 1237127618, "names": ["docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:8f4a61e6187a5cb8914638d84b5fbbc0c384541b93b9370514afb05bb32ae638", "docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1237080616, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:6619697410b73151f131ecf9ffb8ce45626ddf5690739409cb2923e742688e44"]}, {"sizeBytes": 1237080548, "names": 
["docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:7b465f44c0f304a68208bffb8fc87c1ea9638da526c0f981371092c2eedfd520"]}, {"sizeBytes": 1237080436, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:3a845810bc2213515fa4af3ac8547846fdbdc75f3e8617e67a890ccdbf21ae26"]}, {"sizeBytes": 1237080414, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:4ac591ec5be9c81a7d94cd80474e5fd2b5d36d587f0db669317a2c38afcb41f3"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1196480016, "names": ["registry.spdev.net/aidablu/mistral@sha256:0e82049a566de0fa322e7298a12dca8e2afc60f42c57ce1749c6d0b1418046c4", "registry.spdev.net/aidablu/mistral:latest"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-11-14T12:45:19Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:51:56Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-14T12:45:19Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:51:56Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-12-20T09:47:34Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:51:56Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:51:56Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:51:56Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T21:35:14Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:51:56Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node07.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node07.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node07.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871441", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node07.os.ad.scanplus.de", "uid": "9a63d9ad-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') ok: [sp-os-node07.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node07.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node07.os.ad.scanplus.de -o json -n 
default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node07.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "odd", "zone": "RZ-LM07" }, "name": "sp-os-node07.os.ad.scanplus.de", "resourceVersion": "93871441", "selfLink": "/api/v1/nodes/sp-os-node07.os.ad.scanplus.de", "uid": "9a63d9ad-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node07.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.90", "type": "InternalIP" }, { "address": "sp-os-node07.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147440Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249840Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:51:56Z", "lastTransitionTime": "2018-11-14T12:45:19Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:51:56Z", "lastTransitionTime": "2018-11-14T12:45:19Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:56Z", "lastTransitionTime": "2018-12-20T09:47:34Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:51:56Z", "lastTransitionTime": "2019-01-09T14:51:56Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:51:56Z", "lastTransitionTime": "2018-09-13T21:35:14Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/timetest/timecontrol@sha256:7810e74bfa0d0813425a90e8656582c8f030616eb744c586085b538bce8dec4a" ], "sizeBytes": 2112658666 }, { "names": [ "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:4e36ea72daef21cf78abce6fbec5f002d505eddb6ce16b06ca24f20b41f4232b" ], "sizeBytes": 1719657821 }, { "names": [ "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f", "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows:latest" ], "sizeBytes": 1719654738 }, { "names": [ "docker-registry.default.svc:5000/mhe-blu/aidabluworkflows@sha256:6c3e03bc32b64dc9ec6f663d1c61c8578672b45fa99dcf2904670531d5a366ad" ], "sizeBytes": 1371524286 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8a9b2f3497798080d790276a063440d619e44efca05ff6472159b20f5481be49" ], "sizeBytes": 1367428353 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:e1742428482eca5b0b0e43ca589d812b35a1920bb9f09c01cf953b0f66ef5a9a" ], "sizeBytes": 
1367428248 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:d58fcd36ac381f430007485d4d1d2234e8846c611d3abbfdcda86233721b17bd" ], "sizeBytes": 1367415809 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:daa99d134becfa9b9b1768783ad45c6bca845b29d01ab8a9978bedb17bc5b59e" ], "sizeBytes": 1367415760 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:02e38f5cf4a468064c9f6b4028f5a7abb7ad3d06f53a03f165fb07363631a3b9" ], "sizeBytes": 1367415511 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8cc8d6a6ccbb6720f1597bc455c67161ce589c2ffeb07664b1f744454777c411" ], "sizeBytes": 1367412932 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:85f0dff9fee0427344f992db9105fb41b08b7eb440270d35196f43652bb03f04" ], "sizeBytes": 1367412869 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:2d70814b7aa27f2e3997383cb900ae99c33c8fd58cc22fe6039ddc9c33e3a74b" ], "sizeBytes": 1367412621 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:840e0426bcade49105d1cfe6ebca240d8fabfc3cbbb9b24f81bd2a0fc8f5ed9b", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest" ], "sizeBytes": 1367412535 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:3d8be53a8e587ecdc87c7ee6c17ce2cc929324c5ed563d9ad37e36209cc0d2c7" ], "sizeBytes": 1367412528 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:872fffe48f91dde2070ce3703af7db231ed7ac163a2e3393a8be0b443221f137" ], "sizeBytes": 1367412469 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:c164f945eeef8c540c9f05866d336fda6951df54ddf387f7da5202d87c6c2110" ], "sizeBytes": 1367386571 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:918e8988a972053c4ae030bd2202f5a7414d3e2a547e87d2823d93a42a84dcb0" ], "sizeBytes": 1365058051 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:c866f2e6a55e8c2d0508a2b873743fed23eec2b48a63941ac968b5e1f62eca31" ], "sizeBytes": 1365050753 }, { "names": [ "registry.spdev.net/aidablu/aidabluworkflows@sha256:6e904bba02adb0a244f05349d941f31d175d670fd619f15f1fe83b8449689c2e" ], "sizeBytes": 1355421688 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:c9338306320131e753758787028bdee3f8666f4883c8f08acf26d5efd6224a88" ], "sizeBytes": 1322155335 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:4b4eb2d1b3e8c77808c9a231d6cac4abb9b53da383eb726f3e06d047b51863ba" ], "sizeBytes": 1322120182 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:05b9fc7f53e8b31a0c39553f0ed9e925ee0d8fafb41570f73d40df8cd8f20982" ], "sizeBytes": 1322086654 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:c255a8a5027f542ebf2a8233b477ad2a791849b964663aebc5401e044af394b6" ], "sizeBytes": 1322085559 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:7fa5bf8894d24dcd456881c13a4308a0c7ac4c453892dfeb777b819452667f41" ], "sizeBytes": 1321996874 }, { "names": [ 
"docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:da8202dd027662964e4c97d42288f201d779f3bbae8ac50d6684750e072a27ee" ], "sizeBytes": 1321891170 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:a452127451507663962c797e27bdf29c35c7ab0a8bc64f0846e02823519758e5" ], "sizeBytes": 1260401062 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28" ], "sizeBytes": 1260369106 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:e15e44608b0fdb21fac7d1007506614b51b945c5ec5c3b9e576e744f8ccbe4aa" ], "sizeBytes": 1260352849 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:fbbe87a501e72a1c9575515c024ba3dd6f449cb939149bb5e943c23ab7468776" ], "sizeBytes": 1260329003 }, { "names": [ "docker-registry.default.svc:5000/automation-zimmermann-blu/aidabluworkflows@sha256:47783cce85c7196fbfc69bc6cd2b119862616e6ba0e7d1742bba422a1d43c328", "docker-registry.default.svc:5000/automation-zimmermann-blu/aidabluworkflows:latest" ], "sizeBytes": 1246007543 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:3b1b39f4087f471cd58047e6dfc32bcde465c14b8dd5418d80381e428a000083", "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows:v1" ], "sizeBytes": 1241229720 }, { "names": [ "docker-registry.default.svc:5000/blu-behrens/aidabluworkflows@sha256:b514a9ca9ffb96ed37acf27b14630e1d64768e384831fa07d29dbc1af9b86fba" ], "sizeBytes": 1241092356 }, { "names": [ "docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:4cb02a2015dd26a1c20e9a4b94ab271368a5fe11611e79c0ded27771d67f9228" ], "sizeBytes": 1237271903 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:f903c8e05893254eee5fd1ab7f3535e8aa0238712a27ee35fc32d66370c1973e" ], "sizeBytes": 1237236937 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:7ab1f3df42d1e359bcde3e3e9978b10ff8d3f8c2facdfb3efa037d543ddf4a72" ], "sizeBytes": 1237236834 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:8e8377f266b4e3e21995667f6c6d6c63db101ed9a935b06ca813c53a8619355b" ], "sizeBytes": 1237236515 }, { "names": [ "docker-registry.default.svc:5000/aida-1423/aidabluworkflows@sha256:8d45507098026fc3be703b0b0d0def25df58a2a5c86f2b82a2639c99bdd76582" ], "sizeBytes": 1237234244 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:fc1c714c181819fc40df7cfea62ed1656ce2dbd150f58aba4ac7f1fb2157e684" ], "sizeBytes": 1237234015 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:5f59305cf8edfbd5fa70337d60d8f246714507ae209be9131bcc6fa54f49ec7c", "docker-registry.default.svc:5000/rapp-test/aidabluworkflows:latest" ], "sizeBytes": 1237234005 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:2cff5c19a3f6f8c08e22ff0cd6e25b1955e22b4d4f97e8195756e02127a3078d" ], "sizeBytes": 1237233908 }, { "names": [ "docker-registry.default.svc:5000/aidblu-132/aidabluworkflows@sha256:b8e02d587456638baa28f36fee7cbed81b25f06bfa0047ff16f91ff60ff37436" ], 
"sizeBytes": 1237233603 }, { "names": [ "docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:8f4a61e6187a5cb8914638d84b5fbbc0c384541b93b9370514afb05bb32ae638", "docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows:latest" ], "sizeBytes": 1237127618 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:6619697410b73151f131ecf9ffb8ce45626ddf5690739409cb2923e742688e44" ], "sizeBytes": 1237080616 }, { "names": [ "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:7b465f44c0f304a68208bffb8fc87c1ea9638da526c0f981371092c2eedfd520" ], "sizeBytes": 1237080548 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:3a845810bc2213515fa4af3ac8547846fdbdc75f3e8617e67a890ccdbf21ae26" ], "sizeBytes": 1237080436 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:4ac591ec5be9c81a7d94cd80474e5fd2b5d36d587f0db669317a2c38afcb41f3" ], "sizeBytes": 1237080414 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:0e82049a566de0fa322e7298a12dca8e2afc60f42c57ce1749c6d0b1418046c4", "registry.spdev.net/aidablu/mistral:latest" ], "sizeBytes": 1196480016 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 } ], "nodeInfo": { "architecture": "amd64", "bootID": "c8322728-b7df-42b4-8a87-aee1d595710d", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422AA3AF-4194-66BE-734A-E269F11EDB55" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:51:58 +0100 (0:00:11.797) 0:12:32.617 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node08.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; 
argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:52:58 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "34894", "LimitSIGPENDING": "63382", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "10162143909977", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "10162143910091", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:52:58 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:58 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:52:59 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "shutdown.target multi-user.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "10162142886226", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:52:58 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "10162142884123", "MainPID": "34894", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:59 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "105472000", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:52:58 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:58 CET", "StandardInput": "null", "AssertTimestampMonotonic": "10162142884124", "DefaultDependencies": "yes", "Requires": "-.mount var.mount basic.target", "TasksMax": 
"18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "10162142886163", "AllowIsolate": "no", "Wants": "docker.service dnsmasq.service system.slice", "After": "systemd-journald.socket dnsmasq.service -.mount ntpd.service docker.service system.slice var.mount chronyd.service basic.target", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "10162142857337", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "10162142836285", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node08.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:52:59 CET", "ActiveEnterTimestampMonotonic": "10162143910091", "ActiveExitTimestamp": "Wed 2019-01-09 14:52:58 CET", "ActiveExitTimestampMonotonic": "10162142836285", "ActiveState": "active", "After": "systemd-journald.socket dnsmasq.service -.mount ntpd.service docker.service system.slice var.mount chronyd.service basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:52:58 CET", "AssertTimestampMonotonic": "10162142884124", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:52:58 CET", "ConditionTimestampMonotonic": "10162142884123", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "34894", "ExecMainStartTimestamp": "Wed 2019-01-09 
14:52:58 CET", "ExecMainStartTimestampMonotonic": "10162142886163", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:52:58 CET", "InactiveEnterTimestampMonotonic": "10162142857337", "InactiveExitTimestamp": "Wed 2019-01-09 14:52:58 CET", "InactiveExitTimestampMonotonic": "10162142886226", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "34894", "MemoryAccounting": "yes", "MemoryCurrent": "105472000", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount var.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker.service dnsmasq.service system.slice", "WatchdogTimestamp": "Wed 2019-01-09 14:52:59 CET", "WatchdogTimestampMonotonic": "10162143909977", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] 
******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:52:00 +0100 (0:00:01.950) 0:12:34.567 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node08.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node08.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.91"}, {"type": "Hostname", "address": "sp-os-node08.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "40912309-94a5-46e0-8542-26f4806b9914", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A0783-7155-AC41-2292-5EE89A8FF6FB", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-09T19:02:07Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-09T19:02:07Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:21:03Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:00Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node08.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": 
"sp-os-node08.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node08.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871463", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node08.os.ad.scanplus.de", "uid": "9a5315e4-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node08.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node08.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node08.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node08.os.ad.scanplus.de", "resourceVersion": "93871463", "selfLink": "/api/v1/nodes/sp-os-node08.os.ad.scanplus.de", "uid": "9a5315e4-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node08.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.91", "type": "InternalIP" }, { "address": "sp-os-node08.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2018-11-09T19:02:07Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2018-11-09T19:02:07Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2019-01-09T10:21:03Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2019-01-09T14:52:00Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { 
"kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "40912309-94a5-46e0-8542-26f4806b9914", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A0783-7155-AC41-2292-5EE89A8FF6FB" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node08.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node08.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.91"}, {"type": "Hostname", "address": "sp-os-node08.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "40912309-94a5-46e0-8542-26f4806b9914", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A0783-7155-AC41-2292-5EE89A8FF6FB", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-11-09T19:02:07Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-09T19:02:07Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:21:03Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:00Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:00Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node08.os.ad.scanplus.de"}, 
"apiVersion": "v1", "metadata": {"name": "sp-os-node08.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node08.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871463", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node08.os.ad.scanplus.de", "uid": "9a5315e4-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node08.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node08.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node08.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node08.os.ad.scanplus.de", "resourceVersion": "93871463", "selfLink": "/api/v1/nodes/sp-os-node08.os.ad.scanplus.de", "uid": "9a5315e4-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node08.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.91", "type": "InternalIP" }, { "address": "sp-os-node08.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2018-11-09T19:02:07Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2018-11-09T19:02:07Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2019-01-09T10:21:03Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2019-01-09T14:52:00Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:00Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": 
"PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "40912309-94a5-46e0-8542-26f4806b9914", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A0783-7155-AC41-2292-5EE89A8FF6FB" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node08.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node08.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249848Ki"}, "addresses": [{"type": "InternalIP", "address": "172.30.81.91"}, {"type": "Hostname", "address": "sp-os-node08.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "40912309-94a5-46e0-8542-26f4806b9914", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "422A0783-7155-AC41-2292-5EE89A8FF6FB", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147448Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1719654738, "names": ["docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f"]}, {"sizeBytes": 1367428353, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8a9b2f3497798080d790276a063440d619e44efca05ff6472159b20f5481be49", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1367428248, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:e1742428482eca5b0b0e43ca589d812b35a1920bb9f09c01cf953b0f66ef5a9a"]}, {"sizeBytes": 1367415809, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:d58fcd36ac381f430007485d4d1d2234e8846c611d3abbfdcda86233721b17bd"]}, {"sizeBytes": 1367415760, "names": ["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:daa99d134becfa9b9b1768783ad45c6bca845b29d01ab8a9978bedb17bc5b59e"]}, {"sizeBytes": 1367415696, "names": 
["docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:fccae76dabfcc36da4430c6585f8365a5d1f58e93c7aae77a5f12a4b0dde4eaf"]}, {"sizeBytes": 1365058051, "names": ["docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:918e8988a972053c4ae030bd2202f5a7414d3e2a547e87d2823d93a42a84dcb0", "docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1322120182, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:4b4eb2d1b3e8c77808c9a231d6cac4abb9b53da383eb726f3e06d047b51863ba", "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows:v1"]}, {"sizeBytes": 1322086654, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:05b9fc7f53e8b31a0c39553f0ed9e925ee0d8fafb41570f73d40df8cd8f20982"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1260369106, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28", "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1260352849, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:e15e44608b0fdb21fac7d1007506614b51b945c5ec5c3b9e576e744f8ccbe4aa"]}, {"sizeBytes": 1260329003, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:fbbe87a501e72a1c9575515c024ba3dd6f449cb939149bb5e943c23ab7468776"]}, {"sizeBytes": 1237268824, "names": ["docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:c3e853c401b780c90cdf2e6edb18873cc51334d36097e930be1682abff0749c2"]}, {"sizeBytes": 1237234244, "names": ["docker-registry.default.svc:5000/aida-1423/aidabluworkflows@sha256:8d45507098026fc3be703b0b0d0def25df58a2a5c86f2b82a2639c99bdd76582", "docker-registry.default.svc:5000/aida-1423/aidabluworkflows:latest"]}, {"sizeBytes": 1237234015, "names": ["docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:fc1c714c181819fc40df7cfea62ed1656ce2dbd150f58aba4ac7f1fb2157e684"]}, {"sizeBytes": 1237080777, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:04ae8faf03f4ceea2cc6148d8e4230da5f335dcfa614e3579c5097fb5477d179"]}, {"sizeBytes": 1196560257, "names": ["registry.spdev.net/aidablu/mistral@sha256:425750be114a61cabf4b58e9be68745c672d2561260cb84c82a0646ebf1f34c1", "registry.spdev.net/aidablu/mistral:latest"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1023247578, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient@sha256:04eb5cc2da0e52f48c0d837d863696f1e873fbc48c1eb47f433771542ed637c9", "docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient:latest"]}, {"sizeBytes": 929099446, "names": 
["docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:494c0a3a076ea3e3d0219e9c17fe6c088406b36deb2083d93e83829504f8da91", "docker-registry.default.svc:5000/automation-gleim/autopython35:latest"]}, {"sizeBytes": 873644081, "names": ["docker-registry.default.svc:5000/sdi-openshift/aida-portal@sha256:c51101b3549da018fd9de1d7eceaef80e180ab3dd2cdf8796eaaf6e30559385d", "docker-registry.default.svc:5000/sdi-openshift/aida-portal:develop"]}, {"sizeBytes": 848662625, "names": ["docker-registry.default.svc:5000/syi-test/ec-portal@sha256:246abd2b09e7c45ed993266f288846be8d6b49ec5c0e0f36bd5a94a33ca2a5ed"]}, {"sizeBytes": 848591556, "names": ["docker-registry.default.svc:5000/syi-test/ec-portal@sha256:594aefcd125d9bf57f034fec1d5738c395722501f32e2a83b27cf51d6ce5b081", "docker-registry.default.svc:5000/syi-test/ec-portal:latest"]}, {"sizeBytes": 848591512, "names": ["docker-registry.default.svc:5000/syi-test/ec-portal@sha256:ead4b74df73367479451bd0a511d99a2387a5f36412faaa2a3b7e3003e5d0adb"]}, {"sizeBytes": 822686028, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:d0f2f3c44139144f99934560ac8f7de1ba538b3c5c17803fa7af0106cb042f0e"]}, {"sizeBytes": 822681927, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:da2c96af593ed26ab0a862652e453076ae7cc86539482b04c7cf4c31bb939868", "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu:develop"]}, {"sizeBytes": 783548789, "names": ["docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:049594cf2dedc76f2e815ad53cc8a80a3e3f5682add0a590d661b731357a3e57", "docker-registry.default.svc:5000/rapp-test/aida-blu:latest"]}, {"sizeBytes": 783548620, "names": ["docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:947efdb9be6ed1030b3f67878f2d5f696a0658f073a5a0ee28bf8c2cdc2bf517"]}, {"sizeBytes": 783548603, "names": ["docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:0562a96a846115bbd6e6601b277d0f917cd5a685711d8e9fb6ef67082d3922d0"]}, {"sizeBytes": 781183834, "names": ["docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu@sha256:d4abeba247c5a19b3b7bf000f1e2b46dfeb8cdea6cb67931356ae65b1e1b49a9", "docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu:develop"]}, {"sizeBytes": 773441212, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:04484e46e9ae1f027c16402ccc01f630f6a4560bd4bec27ac443a85b2460d50e"]}, {"sizeBytes": 773428961, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:d9714a135f7bc9fe05187ff44caae67bca14d9648a14f83354dfca5055423a98"]}, {"sizeBytes": 769141701, "names": ["docker-registry.default.svc:5000/sp-base/blu-python@sha256:e7333e11bd486d5515b069f1a60a6ac71ec2faa752196949c88f74f275927ab3", "docker-registry.default.svc:5000/sp-base/blu-python:latest"]}, {"sizeBytes": 769122552, "names": ["docker-registry.default.svc:5000/sp-base/blu-python@sha256:94126142cfa75cf47ed69fa3c0ecb81b08d7ae29d89c143c7777b134528876df"]}, {"sizeBytes": 706339178, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu@sha256:3ba31a778b1d6c4b74ac5ae1ff8d03bd6da85e084e1f57f9584dcc4fbbd69e57", "docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu:develop"]}, {"sizeBytes": 706282746, "names": ["docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu@sha256:3a886e2b68a6c4380def94b6d612e8dbb17874bef6ddc0a59f864ac7ff1e13ad"]}, {"sizeBytes": 701148931, "names": 
["docker-registry.default.svc:5000/automation-gleim-blu/aida-blu@sha256:1bfa38e361910c7b8cdb954e78dd89325aab4bbf5112720b3a00834c86fbb287", "docker-registry.default.svc:5000/automation-gleim-blu/aida-blu:develop"]}, {"sizeBytes": 700250319, "names": ["docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:b8774eac5a896281dfcd7c978678dd94a3771fe546b922273d0609f3d92564a9"]}, {"sizeBytes": 699582068, "names": ["docker-registry.default.svc:5000/automation-rick-blu/aida-blu@sha256:6892a86d6a8e16edf9635b3e2fa858c2e5585aec754fcada652afd1b4d22996e", "docker-registry.default.svc:5000/automation-rick-blu/aida-blu:develop"]}, {"sizeBytes": 685991088, "names": ["docker.io/centos/python-36-centos7@sha256:091d56e3ab03d52ef0ffac4b88e7e1fa24ea0243bfd05297882c12ff8a0ba1df", "docker.io/centos/python-36-centos7:latest"]}, {"sizeBytes": 683822752, "names": ["docker-registry.default.svc:5000/sp-base/basepython@sha256:d14894cc2849ad972696397c4ab463f63ae2fa1fdb5a96f6239b48d54ca4533a"]}, {"sizeBytes": 683822752, "names": ["docker-registry.default.svc:5000/test-rapp-blu/basepython@sha256:4c8cebcfa445cfa0f7d5f1fbc53042b4f6035b2cad2479f07d0b5186c34f3542", "docker-registry.default.svc:5000/test-rapp-blu/basepython:latest"]}, {"sizeBytes": 683822752, "names": ["docker-registry.default.svc:5000/sp-base/basepython@sha256:0294a6981acb5a51ca7d0447337019f645d13fdaf67f7e7d9d47a7c0ccf58d96", "docker-registry.default.svc:5000/sp-base/basepython:latest"]}, {"sizeBytes": 683822752, "names": ["docker-registry.default.svc:5000/test-rapp/basepython@sha256:c43503c10de9b75df55bbd2a5770a072441a0ed0a6ecc5f9e98f9299602e41fe", "docker-registry.default.svc:5000/test-rapp/basepython:latest"]}, {"sizeBytes": 680711248, "names": ["docker.io/centos/python-35-centos7@sha256:314cc72c9090e6d893c6371e239d6f632fb07fd971ffc5df39ef542b9da54c30", "docker.io/centos/python-35-centos7:latest"]}, {"sizeBytes": 644972333, "names": ["docker-registry.default.svc:5000/automation-basisprod/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc", "docker-registry.default.svc:5000/automation-test2/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc"]}, {"sizeBytes": 629245986, "names": ["docker-registry.default.svc:5000/automation-heine/autopython35_taggingclient@sha256:312de12001785159cc7f48016f021030c185290928d726d3792260531983e24b", "docker-registry.default.svc:5000/automation-heine1/autopython35_taggingclient@sha256:312de12001785159cc7f48016f021030c185290928d726d3792260531983e24b"]}, {"sizeBytes": 629245558, "names": ["docker-registry.default.svc:5000/automation-heine-blu/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86", "docker-registry.default.svc:5000/automation-heine/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86", "docker-registry.default.svc:5000/automation-heine1/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-11-09T19:02:07Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:10Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-11-09T19:02:07Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:10Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:21:03Z", "reason": 
"KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:10Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:52:10Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:52:10Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:10Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node08.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node08.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-LM07", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node08.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871531", "creationTimestamp": "2018-05-14T13:13:33Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node08.os.ad.scanplus.de", "uid": "9a5315e4-5778-11e8-9cd3-005056aa3492"}}]}}\n', '') ok: [sp-os-node08.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node08.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node08.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-05-14T13:13:33Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node08.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "dev", "region": "primary", "update.group": "even", "zone": "RZ-LM07" }, "name": "sp-os-node08.os.ad.scanplus.de", "resourceVersion": "93871531", "selfLink": "/api/v1/nodes/sp-os-node08.os.ad.scanplus.de", "uid": "9a5315e4-5778-11e8-9cd3-005056aa3492" }, "spec": { "externalID": "sp-os-node08.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.30.81.91", "type": "InternalIP" }, { "address": "sp-os-node08.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147448Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249848Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:10Z", "lastTransitionTime": "2018-11-09T19:02:07Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:10Z", "lastTransitionTime": "2018-11-09T19:02:07Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": 
"MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:10Z", "lastTransitionTime": "2019-01-09T10:21:03Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:10Z", "lastTransitionTime": "2019-01-09T14:52:10Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:10Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-rapp-blu/aidabluworkflows@sha256:bb144132f8bc09ab08aa224ca631e3aa3428e6d278e679e90ca778d89d13a28f" ], "sizeBytes": 1719654738 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:8a9b2f3497798080d790276a063440d619e44efca05ff6472159b20f5481be49", "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows:latest" ], "sizeBytes": 1367428353 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:e1742428482eca5b0b0e43ca589d812b35a1920bb9f09c01cf953b0f66ef5a9a" ], "sizeBytes": 1367428248 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:d58fcd36ac381f430007485d4d1d2234e8846c611d3abbfdcda86233721b17bd" ], "sizeBytes": 1367415809 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:daa99d134becfa9b9b1768783ad45c6bca845b29d01ab8a9978bedb17bc5b59e" ], "sizeBytes": 1367415760 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aidabluworkflows@sha256:fccae76dabfcc36da4430c6585f8365a5d1f58e93c7aae77a5f12a4b0dde4eaf" ], "sizeBytes": 1367415696 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows@sha256:918e8988a972053c4ae030bd2202f5a7414d3e2a547e87d2823d93a42a84dcb0", "docker-registry.default.svc:5000/automation-schoenthaler-blu/aidabluworkflows:latest" ], "sizeBytes": 1365058051 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:4b4eb2d1b3e8c77808c9a231d6cac4abb9b53da383eb726f3e06d047b51863ba", "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows:v1" ], "sizeBytes": 1322120182 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aidabluworkflows@sha256:05b9fc7f53e8b31a0c39553f0ed9e925ee0d8fafb41570f73d40df8cd8f20982" ], "sizeBytes": 1322086654 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:d3eff562dd38a600d93e6874f612d97decb9bee9b70a6fe6cc41a211bb3dae28", "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows:latest" ], "sizeBytes": 1260369106 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:e15e44608b0fdb21fac7d1007506614b51b945c5ec5c3b9e576e744f8ccbe4aa" ], "sizeBytes": 1260352849 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aidabluworkflows@sha256:fbbe87a501e72a1c9575515c024ba3dd6f449cb939149bb5e943c23ab7468776" ], 
"sizeBytes": 1260329003 }, { "names": [ "docker-registry.default.svc:5000/automation-rick-blu/aidabluworkflows@sha256:c3e853c401b780c90cdf2e6edb18873cc51334d36097e930be1682abff0749c2" ], "sizeBytes": 1237268824 }, { "names": [ "docker-registry.default.svc:5000/aida-1423/aidabluworkflows@sha256:8d45507098026fc3be703b0b0d0def25df58a2a5c86f2b82a2639c99bdd76582", "docker-registry.default.svc:5000/aida-1423/aidabluworkflows:latest" ], "sizeBytes": 1237234244 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aidabluworkflows@sha256:fc1c714c181819fc40df7cfea62ed1656ce2dbd150f58aba4ac7f1fb2157e684" ], "sizeBytes": 1237234015 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aidabluworkflows@sha256:04ae8faf03f4ceea2cc6148d8e4230da5f335dcfa614e3579c5097fb5477d179" ], "sizeBytes": 1237080777 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:425750be114a61cabf4b58e9be68745c672d2561260cb84c82a0646ebf1f34c1", "registry.spdev.net/aidablu/mistral:latest" ], "sizeBytes": 1196560257 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient@sha256:04eb5cc2da0e52f48c0d837d863696f1e873fbc48c1eb47f433771542ed637c9", "docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient:latest" ], "sizeBytes": 1023247578 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:494c0a3a076ea3e3d0219e9c17fe6c088406b36deb2083d93e83829504f8da91", "docker-registry.default.svc:5000/automation-gleim/autopython35:latest" ], "sizeBytes": 929099446 }, { "names": [ "docker-registry.default.svc:5000/sdi-openshift/aida-portal@sha256:c51101b3549da018fd9de1d7eceaef80e180ab3dd2cdf8796eaaf6e30559385d", "docker-registry.default.svc:5000/sdi-openshift/aida-portal:develop" ], "sizeBytes": 873644081 }, { "names": [ "docker-registry.default.svc:5000/syi-test/ec-portal@sha256:246abd2b09e7c45ed993266f288846be8d6b49ec5c0e0f36bd5a94a33ca2a5ed" ], "sizeBytes": 848662625 }, { "names": [ "docker-registry.default.svc:5000/syi-test/ec-portal@sha256:594aefcd125d9bf57f034fec1d5738c395722501f32e2a83b27cf51d6ce5b081", "docker-registry.default.svc:5000/syi-test/ec-portal:latest" ], "sizeBytes": 848591556 }, { "names": [ "docker-registry.default.svc:5000/syi-test/ec-portal@sha256:ead4b74df73367479451bd0a511d99a2387a5f36412faaa2a3b7e3003e5d0adb" ], "sizeBytes": 848591512 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:d0f2f3c44139144f99934560ac8f7de1ba538b3c5c17803fa7af0106cb042f0e" ], "sizeBytes": 822686028 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:da2c96af593ed26ab0a862652e453076ae7cc86539482b04c7cf4c31bb939868", "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu:develop" ], "sizeBytes": 822681927 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:049594cf2dedc76f2e815ad53cc8a80a3e3f5682add0a590d661b731357a3e57", "docker-registry.default.svc:5000/rapp-test/aida-blu:latest" ], "sizeBytes": 783548789 }, { "names": [ 
"docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:947efdb9be6ed1030b3f67878f2d5f696a0658f073a5a0ee28bf8c2cdc2bf517" ], "sizeBytes": 783548620 }, { "names": [ "docker-registry.default.svc:5000/rapp-test/aida-blu@sha256:0562a96a846115bbd6e6601b277d0f917cd5a685711d8e9fb6ef67082d3922d0" ], "sizeBytes": 783548603 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu@sha256:d4abeba247c5a19b3b7bf000f1e2b46dfeb8cdea6cb67931356ae65b1e1b49a9", "docker-registry.default.svc:5000/automation-puscasu-blu/aida-blu:develop" ], "sizeBytes": 781183834 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:04484e46e9ae1f027c16402ccc01f630f6a4560bd4bec27ac443a85b2460d50e" ], "sizeBytes": 773441212 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:d9714a135f7bc9fe05187ff44caae67bca14d9648a14f83354dfca5055423a98" ], "sizeBytes": 773428961 }, { "names": [ "docker-registry.default.svc:5000/sp-base/blu-python@sha256:e7333e11bd486d5515b069f1a60a6ac71ec2faa752196949c88f74f275927ab3", "docker-registry.default.svc:5000/sp-base/blu-python:latest" ], "sizeBytes": 769141701 }, { "names": [ "docker-registry.default.svc:5000/sp-base/blu-python@sha256:94126142cfa75cf47ed69fa3c0ecb81b08d7ae29d89c143c7777b134528876df" ], "sizeBytes": 769122552 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu@sha256:3ba31a778b1d6c4b74ac5ae1ff8d03bd6da85e084e1f57f9584dcc4fbbd69e57", "docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu:develop" ], "sizeBytes": 706339178 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein-blu/aida-blu@sha256:3a886e2b68a6c4380def94b6d612e8dbb17874bef6ddc0a59f864ac7ff1e13ad" ], "sizeBytes": 706282746 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim-blu/aida-blu@sha256:1bfa38e361910c7b8cdb954e78dd89325aab4bbf5112720b3a00834c86fbb287", "docker-registry.default.svc:5000/automation-gleim-blu/aida-blu:develop" ], "sizeBytes": 701148931 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel-blu/aida-blu@sha256:b8774eac5a896281dfcd7c978678dd94a3771fe546b922273d0609f3d92564a9" ], "sizeBytes": 700250319 }, { "names": [ "docker-registry.default.svc:5000/automation-rick-blu/aida-blu@sha256:6892a86d6a8e16edf9635b3e2fa858c2e5585aec754fcada652afd1b4d22996e", "docker-registry.default.svc:5000/automation-rick-blu/aida-blu:develop" ], "sizeBytes": 699582068 }, { "names": [ "docker.io/centos/python-36-centos7@sha256:091d56e3ab03d52ef0ffac4b88e7e1fa24ea0243bfd05297882c12ff8a0ba1df", "docker.io/centos/python-36-centos7:latest" ], "sizeBytes": 685991088 }, { "names": [ "docker-registry.default.svc:5000/sp-base/basepython@sha256:d14894cc2849ad972696397c4ab463f63ae2fa1fdb5a96f6239b48d54ca4533a" ], "sizeBytes": 683822752 }, { "names": [ "docker-registry.default.svc:5000/test-rapp-blu/basepython@sha256:4c8cebcfa445cfa0f7d5f1fbc53042b4f6035b2cad2479f07d0b5186c34f3542", "docker-registry.default.svc:5000/test-rapp-blu/basepython:latest" ], "sizeBytes": 683822752 }, { "names": [ "docker-registry.default.svc:5000/sp-base/basepython@sha256:0294a6981acb5a51ca7d0447337019f645d13fdaf67f7e7d9d47a7c0ccf58d96", "docker-registry.default.svc:5000/sp-base/basepython:latest" ], "sizeBytes": 683822752 }, { "names": [ "docker-registry.default.svc:5000/test-rapp/basepython@sha256:c43503c10de9b75df55bbd2a5770a072441a0ed0a6ecc5f9e98f9299602e41fe", "docker-registry.default.svc:5000/test-rapp/basepython:latest" ], "sizeBytes": 683822752 }, { 
"names": [ "docker.io/centos/python-35-centos7@sha256:314cc72c9090e6d893c6371e239d6f632fb07fd971ffc5df39ef542b9da54c30", "docker.io/centos/python-35-centos7:latest" ], "sizeBytes": 680711248 }, { "names": [ "docker-registry.default.svc:5000/automation-basisprod/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc", "docker-registry.default.svc:5000/automation-test2/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc" ], "sizeBytes": 644972333 }, { "names": [ "docker-registry.default.svc:5000/automation-heine/autopython35_taggingclient@sha256:312de12001785159cc7f48016f021030c185290928d726d3792260531983e24b", "docker-registry.default.svc:5000/automation-heine1/autopython35_taggingclient@sha256:312de12001785159cc7f48016f021030c185290928d726d3792260531983e24b" ], "sizeBytes": 629245986 }, { "names": [ "docker-registry.default.svc:5000/automation-heine-blu/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86", "docker-registry.default.svc:5000/automation-heine/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86", "docker-registry.default.svc:5000/automation-heine1/autopython35@sha256:61b94468c160728b3003b6503eff42d6d232ad7b8ed54175a0a81e45bebe4d86" ], "sizeBytes": 629245558 } ], "nodeInfo": { "architecture": "amd64", "bootID": "40912309-94a5-46e0-8542-26f4806b9914", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "422A0783-7155-AC41-2292-5EE89A8FF6FB" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:52:11 +0100 (0:00:11.512) 0:12:46.080 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node09.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:12 
CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "44950", "LimitSIGPENDING": "63382", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "10162261270047", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "10162261270129", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:53:12 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:12 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:53:12 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "multi-user.target shutdown.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "10162260538128", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:53:12 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "10162260536552", "MainPID": "44950", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:12 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "92631040", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:53:12 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:12 CET", "StandardInput": "null", "AssertTimestampMonotonic": "10162260536552", "DefaultDependencies": "yes", "Requires": "var.mount -.mount basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", 
"CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "10162260538075", "AllowIsolate": "no", "Wants": "docker.service dnsmasq.service system.slice", "After": "basic.target ntpd.service var.mount systemd-journald.socket dnsmasq.service -.mount system.slice chronyd.service docker.service", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "10162260524552", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "10162260510278", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node09.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:12 CET", "ActiveEnterTimestampMonotonic": "10162261270129", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:12 CET", "ActiveExitTimestampMonotonic": "10162260510278", "ActiveState": "active", "After": "basic.target ntpd.service var.mount systemd-journald.socket dnsmasq.service -.mount system.slice chronyd.service docker.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:53:12 CET", "AssertTimestampMonotonic": "10162260536552", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:53:12 CET", "ConditionTimestampMonotonic": "10162260536552", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "44950", "ExecMainStartTimestamp": "Wed 2019-01-09 14:53:12 CET", "ExecMainStartTimestampMonotonic": "10162260538075", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", 
"FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:12 CET", "InactiveEnterTimestampMonotonic": "10162260524552", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:12 CET", "InactiveExitTimestampMonotonic": "10162260538128", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "44950", "MemoryAccounting": "yes", "MemoryCurrent": "92631040", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "var.mount -.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker.service dnsmasq.service system.slice", "WatchdogTimestamp": "Wed 2019-01-09 14:53:12 CET", "WatchdogTimestampMonotonic": "10162261270047", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:52:13 +0100 (0:00:01.438) 0:12:47.518 ***** Using module file 
/usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node09.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node09.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.170"}, {"type": "Hostname", "address": "sp-os-node09.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "abe8bba6-5851-4e84-be3d-9858381f30a0", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A94E6-C957-CF85-F333-61A93D27FF6D", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:02:26Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:02:26Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-12-21T07:15:37Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:13Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:10:57Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node09.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node09.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node09.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871552", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": 
{"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node09.os.ad.scanplus.de", "uid": "cf4f6b4b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node09.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node09.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node09.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node09.os.ad.scanplus.de", "resourceVersion": "93871552", "selfLink": "/api/v1/nodes/sp-os-node09.os.ad.scanplus.de", "uid": "cf4f6b4b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node09.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.170", "type": "InternalIP" }, { "address": "sp-os-node09.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-09-13T23:02:26Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-09-13T23:02:26Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-12-21T07:15:37Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2019-01-09T14:52:13Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-09-13T22:10:57Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "abe8bba6-5851-4e84-be3d-9858381f30a0", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": 
"420A94E6-C957-CF85-F333-61A93D27FF6D" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node09.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node09.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.170"}, {"type": "Hostname", "address": "sp-os-node09.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "abe8bba6-5851-4e84-be3d-9858381f30a0", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A94E6-C957-CF85-F333-61A93D27FF6D", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:02:26Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:02:26Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-12-21T07:15:37Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:13Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:10:57Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:13Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node09.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node09.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node09.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": 
"93871552", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node09.os.ad.scanplus.de", "uid": "cf4f6b4b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node09.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node09.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node09.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node09.os.ad.scanplus.de", "resourceVersion": "93871552", "selfLink": "/api/v1/nodes/sp-os-node09.os.ad.scanplus.de", "uid": "cf4f6b4b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node09.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.170", "type": "InternalIP" }, { "address": "sp-os-node09.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-09-13T23:02:26Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-09-13T23:02:26Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-12-21T07:15:37Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2019-01-09T14:52:13Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:13Z", "lastTransitionTime": "2018-09-13T22:10:57Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "abe8bba6-5851-4e84-be3d-9858381f30a0", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", 
"operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A94E6-C957-CF85-F333-61A93D27FF6D" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node09.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node09.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.170"}, {"type": "Hostname", "address": "sp-os-node09.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "abe8bba6-5851-4e84-be3d-9858381f30a0", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A94E6-C957-CF85-F333-61A93D27FF6D", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1863493281, "names": ["docker-registry.default.svc:5000/automation-maier/networkapi@sha256:b684feedc596427e7af497f773e90470ed253253eac46a6615cf056f3630a45f"]}, {"sizeBytes": 1863488428, "names": ["docker-registry.default.svc:5000/automation-prodtest/networkapi@sha256:09dd40d596d5c86462efabafb37487369280fccebd6ce221ec00c710e2543554", "docker-registry.default.svc:5000/automation-prodtest/networkapi:latest"]}, {"sizeBytes": 1862991681, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:0e1ea228d2ffaeae7152c69b5ceaf6c3a32e58462ec79874613ddf816fcc0251", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi:latest"]}, {"sizeBytes": 1862981742, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi@sha256:5d887de228c14d92aebc17c413be9d66261bd173a23114ed80dedb18a2fb6cda", "docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi:latest"]}, {"sizeBytes": 1853980686, "names": ["docker-registry.default.svc:5000/automation-haertenstein/networkapi@sha256:885a3d39a23560372143b795d2e5de1356a39d66e833d60704ea8a051b1fefb7"]}, {"sizeBytes": 1839144550, "names": ["docker-registry.default.svc:5000/automation-gleim/networkapi@sha256:228ba404eb03f578a1b5f70a02000f8c369a0c8822f28d54fb88d2a9a5a33551"]}, {"sizeBytes": 1838027836, "names": ["docker-registry.default.svc:5000/automation-ziesel/networkapi@sha256:02d6cc2f3fda077caf0b8f7a9a6ed5b3b6d741aed903d362b9f4196ef1b4fefd"]}, {"sizeBytes": 
1367350899, "names": ["docker-registry.default.svc:5000/automation-cgl-blu/aidabluworkflows@sha256:88bed090160b15eab8b146dd86ef20120d23820c811aa1c4cb6cb5a9a021874a"]}, {"sizeBytes": 1272710652, "names": ["docker-registry.default.svc:5000/aidablu-prod/aidabluworkflows@sha256:371a3882b41c78020bafb81fd8181f7081356fe89c75aa77879f9a21f644616a"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1247416602, "names": ["docker-registry.default.svc:5000/aidablu-test/aidabluworkflows@sha256:b9de3318443ff53ac835fd0dbe48f940359e135f62b4c183051d0cab23472cd5"]}, {"sizeBytes": 1237248586, "names": ["docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows@sha256:0629c7cf74518c0513b0ff3f8ec4d56a2dc76738b4e181bc38ac56be9934cb51"]}, {"sizeBytes": 1237095818, "names": ["docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows@sha256:0711f196c373b1d1b2685156da70d96b69eaef432736c80c03243947a0046a8b"]}, {"sizeBytes": 1237078155, "names": ["docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows@sha256:4f00ff28a542d825ebea14278fbee18453b0aa40c390e4182f26269818c3ca29", "docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows:latest"]}, {"sizeBytes": 1227937571, "names": ["registry.spdev.net/aidablu/mistral@sha256:df6af8573fe9d83b52150ebf5bf2b0248d0a50da35521af273f43c8cd8911617", "registry.spdev.net/aidablu/mistral:latest"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1168830756, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:8cd4ff22e4f7f816278b91d45da9159dde80fdd3b1e8d5a9ed9d23b3de19f833"]}, {"sizeBytes": 1168830720, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ba4324c58bfefdc84b448e1fdd188d40af887681d62c35a57b8bc3d76d0ce398", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest"]}, {"sizeBytes": 1168826513, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:2c9a3d44189e80b02a2e858d0f9fb2406814a8d3e5e48c4453ce5d8c6609158f"]}, {"sizeBytes": 1168826497, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:74a40bcbfb6deff62d8f4c477746835ad72ebd2e45e0e763cd0502bf899d1cbc"]}, {"sizeBytes": 1168826441, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:df3f80df3daf06c5d01fec49204fd8e781ca30a4446d2edf9fbe2a516ce84710"]}, {"sizeBytes": 1066868142, "names": ["docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:44618946aa3b74ff466b1567a044d3ae207a5097d39ef96f97b838fba25ff04a", "docker-registry.default.svc:5000/automation-puscasu/networkapi:latest"]}, {"sizeBytes": 1056150462, "names": ["docker-registry.default.svc:5000/automation-prodtest/sshclient@sha256:625f0c29a716dac1d8117751ac0dc1f990cbdb8f05561c6908727c8a8ae93c78"]}, {"sizeBytes": 1056150151, "names": ["docker-registry.default.svc:5000/automation-gleim/sshclient@sha256:aed4298913a4a27a028afeb9a86babacfcdb819e159b43f893c85162cb6b4bbe", 
"docker-registry.default.svc:5000/automation-gleim/sshclient:latest"]}, {"sizeBytes": 1056137402, "names": ["docker-registry.default.svc:5000/automation-haertenstein/sshclient@sha256:ed457799a50707bb68824300527778ac0728aaf91ba3787710810b748fa87acc"]}, {"sizeBytes": 1056004809, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35_sshclient@sha256:592371614115e80a12cb3b27f6a2165562a673c1450965aecf201793bf8acb99", "docker-registry.default.svc:5000/automation-prodtest/autopython35_sshclient:latest"]}, {"sizeBytes": 1056004383, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35_sshclient@sha256:4fe89665d6b7780eb87a5e32afbd96b22a049e2d73420442eafdf44db214b385"]}, {"sizeBytes": 1056004369, "names": ["docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:6adb614ea21893bcdde493c0a400030b3d266a1ea5d39d33b2c1dbe69d33b35e", "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient:latest"]}, {"sizeBytes": 1054461367, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:5c3b3d4922d61fa038bb30a79bb8661c77430d068a5d1118c1d7519562c3d536"]}, {"sizeBytes": 1054427580, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:3cdd43583a81384c3ceb0be0b419efb31d0f3285a422f35bf3e8b33612e54368"]}, {"sizeBytes": 1022591864, "names": ["docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:0904670f3b5ef1b7b1900af238dccf247ee5569c39be773462669621936c22ea", "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi:latest"]}, {"sizeBytes": 1022522195, "names": ["docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:842e615ea62d89270b67561776361dc3bd990865b60455c16537d1109e415d07"]}, {"sizeBytes": 1022461350, "names": ["docker-registry.default.svc:5000/automation-prodtest/taggingclient@sha256:50e1bb8b875ca6d5b990fd804be5efd0284796bdae03f45706a4b636ac3b9857"]}, {"sizeBytes": 971288001, "names": ["docker-registry.default.svc:5000/automation-ziesel/taggingclient@sha256:e42def2ee32ba821c4c415fc87d821cd18033a3f86f5851bce25033df10aae0b", "docker-registry.default.svc:5000/automation-ziesel/taggingclient:latest"]}, {"sizeBytes": 971133434, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35_taggingclient@sha256:82b01c49a390eb664a4abc15b8e7251c06d651b62f7ced8201d9f39d0767a946"]}, {"sizeBytes": 971069328, "names": ["docker-registry.default.svc:5000/automation-puscasu/autopython35_taggingclient@sha256:82ed341599bd28e0fbf726c28f8f9dfe8ade9615abe049660446a3b363e4163f"]}, {"sizeBytes": 928280153, "names": ["docker-registry.default.svc:5000/automation-prodtest/dnsclient@sha256:2e4b645d91ab79eff558a15e61ef2668cd3e7bfe78b35c2fed83822e2a271c7a"]}, {"sizeBytes": 928278127, "names": ["docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:680a2da0a95f171077b8e96b7ab4e2d57072dfb55adc920dad7cc784f8409e06"]}, {"sizeBytes": 881183040, "names": ["docker-registry.default.svc:5000/automation-ziesel/automationapi@sha256:b84b7a76cc52cc66487632438406b38ff1cd35fd75fa4f7a138ff139d8d35755"]}, {"sizeBytes": 877838023, "names": ["docker-registry.default.svc:5000/automation-prod/aciapi@sha256:128588d042f417f37abf945d2ad32be0798d7c7d75305230d3260e26ba4480c8"]}, {"sizeBytes": 877801505, "names": ["docker-registry.default.svc:5000/automation-gleim/vcenterfileclient@sha256:ced225377585c26605de49305f288e0cf1e4619549949bb7b37264e4470d3a67", 
"docker-registry.default.svc:5000/automation-gleim/vcenterfileclient:latest"]}, {"sizeBytes": 877799517, "names": ["docker-registry.default.svc:5000/automation-prodtest/dnsclient@sha256:7096b85c0ff4e248413d94e266bae9e1752a4d8b6eb00b0294997774b9119b8f", "docker-registry.default.svc:5000/automation-prodtest/dnsclient:latest"]}, {"sizeBytes": 877798923, "names": ["docker-registry.default.svc:5000/automation-gleim/dnsclient@sha256:3cfa01f1c868a1664587180e64ac66dd2bfbb20eeade1912b3c52faa91e0f24d", "docker-registry.default.svc:5000/automation-gleim/dnsclient:latest"]}, {"sizeBytes": 877796666, "names": ["docker-registry.default.svc:5000/automation-prodtest/ftpclient@sha256:1c37b83ddcc56e2308d28a7c66043032afd158be16f19c18002a355283cb9615"]}, {"sizeBytes": 877796412, "names": ["docker-registry.default.svc:5000/automation-ziesel/ftpclient@sha256:f5aec16035defd3b78dae96b2a243333d05de3462385d05a9bb55364b6ee1d6f", "docker-registry.default.svc:5000/automation-ziesel/ftpclient:latest"]}, {"sizeBytes": 877716514, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd"]}, {"sizeBytes": 877716262, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:8f47ed6f2a6d94d81bc39b368de3ec7d2aed48daf23e1c10ec085cbecfd29dc0"]}, {"sizeBytes": 877705841, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78", "docker-registry.default.svc:5000/automation-prodtest/autopython35:latest"]}, {"sizeBytes": 877705474, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-09-13T23:02:26Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:23Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-09-13T23:02:26Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:23Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2018-12-21T07:15:37Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:23Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:52:23Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:52:23Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:10:57Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:23Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node09.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node09.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node09.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871608", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": 
"a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node09.os.ad.scanplus.de", "uid": "cf4f6b4b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') ok: [sp-os-node09.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node09.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node09.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node09.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node09.os.ad.scanplus.de", "resourceVersion": "93871608", "selfLink": "/api/v1/nodes/sp-os-node09.os.ad.scanplus.de", "uid": "cf4f6b4b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node09.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.170", "type": "InternalIP" }, { "address": "sp-os-node09.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:23Z", "lastTransitionTime": "2018-09-13T23:02:26Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:23Z", "lastTransitionTime": "2018-09-13T23:02:26Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:23Z", "lastTransitionTime": "2018-12-21T07:15:37Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:23Z", "lastTransitionTime": "2019-01-09T14:52:23Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:23Z", "lastTransitionTime": "2018-09-13T22:10:57Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-maier/networkapi@sha256:b684feedc596427e7af497f773e90470ed253253eac46a6615cf056f3630a45f" ], "sizeBytes": 1863493281 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/networkapi@sha256:09dd40d596d5c86462efabafb37487369280fccebd6ce221ec00c710e2543554", "docker-registry.default.svc:5000/automation-prodtest/networkapi:latest" ], "sizeBytes": 1863488428 }, { "names": [ 
"docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:0e1ea228d2ffaeae7152c69b5ceaf6c3a32e58462ec79874613ddf816fcc0251", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi:latest" ], "sizeBytes": 1862991681 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi@sha256:5d887de228c14d92aebc17c413be9d66261bd173a23114ed80dedb18a2fb6cda", "docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi:latest" ], "sizeBytes": 1862981742 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/networkapi@sha256:885a3d39a23560372143b795d2e5de1356a39d66e833d60704ea8a051b1fefb7" ], "sizeBytes": 1853980686 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/networkapi@sha256:228ba404eb03f578a1b5f70a02000f8c369a0c8822f28d54fb88d2a9a5a33551" ], "sizeBytes": 1839144550 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/networkapi@sha256:02d6cc2f3fda077caf0b8f7a9a6ed5b3b6d741aed903d362b9f4196ef1b4fefd" ], "sizeBytes": 1838027836 }, { "names": [ "docker-registry.default.svc:5000/automation-cgl-blu/aidabluworkflows@sha256:88bed090160b15eab8b146dd86ef20120d23820c811aa1c4cb6cb5a9a021874a" ], "sizeBytes": 1367350899 }, { "names": [ "docker-registry.default.svc:5000/aidablu-prod/aidabluworkflows@sha256:371a3882b41c78020bafb81fd8181f7081356fe89c75aa77879f9a21f644616a" ], "sizeBytes": 1272710652 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/aidablu-test/aidabluworkflows@sha256:b9de3318443ff53ac835fd0dbe48f940359e135f62b4c183051d0cab23472cd5" ], "sizeBytes": 1247416602 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows@sha256:0629c7cf74518c0513b0ff3f8ec4d56a2dc76738b4e181bc38ac56be9934cb51" ], "sizeBytes": 1237248586 }, { "names": [ "docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows@sha256:0711f196c373b1d1b2685156da70d96b69eaef432736c80c03243947a0046a8b" ], "sizeBytes": 1237095818 }, { "names": [ "docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows@sha256:4f00ff28a542d825ebea14278fbee18453b0aa40c390e4182f26269818c3ca29", "docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows:latest" ], "sizeBytes": 1237078155 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:df6af8573fe9d83b52150ebf5bf2b0248d0a50da35521af273f43c8cd8911617", "registry.spdev.net/aidablu/mistral:latest" ], "sizeBytes": 1227937571 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:8cd4ff22e4f7f816278b91d45da9159dde80fdd3b1e8d5a9ed9d23b3de19f833" ], "sizeBytes": 1168830756 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ba4324c58bfefdc84b448e1fdd188d40af887681d62c35a57b8bc3d76d0ce398", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest" ], "sizeBytes": 1168830720 }, { "names": [ 
"docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:2c9a3d44189e80b02a2e858d0f9fb2406814a8d3e5e48c4453ce5d8c6609158f" ], "sizeBytes": 1168826513 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:74a40bcbfb6deff62d8f4c477746835ad72ebd2e45e0e763cd0502bf899d1cbc" ], "sizeBytes": 1168826497 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:df3f80df3daf06c5d01fec49204fd8e781ca30a4446d2edf9fbe2a516ce84710" ], "sizeBytes": 1168826441 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:44618946aa3b74ff466b1567a044d3ae207a5097d39ef96f97b838fba25ff04a", "docker-registry.default.svc:5000/automation-puscasu/networkapi:latest" ], "sizeBytes": 1066868142 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/sshclient@sha256:625f0c29a716dac1d8117751ac0dc1f990cbdb8f05561c6908727c8a8ae93c78" ], "sizeBytes": 1056150462 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/sshclient@sha256:aed4298913a4a27a028afeb9a86babacfcdb819e159b43f893c85162cb6b4bbe", "docker-registry.default.svc:5000/automation-gleim/sshclient:latest" ], "sizeBytes": 1056150151 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/sshclient@sha256:ed457799a50707bb68824300527778ac0728aaf91ba3787710810b748fa87acc" ], "sizeBytes": 1056137402 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35_sshclient@sha256:592371614115e80a12cb3b27f6a2165562a673c1450965aecf201793bf8acb99", "docker-registry.default.svc:5000/automation-prodtest/autopython35_sshclient:latest" ], "sizeBytes": 1056004809 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35_sshclient@sha256:4fe89665d6b7780eb87a5e32afbd96b22a049e2d73420442eafdf44db214b385" ], "sizeBytes": 1056004383 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:6adb614ea21893bcdde493c0a400030b3d266a1ea5d39d33b2c1dbe69d33b35e", "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient:latest" ], "sizeBytes": 1056004369 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:5c3b3d4922d61fa038bb30a79bb8661c77430d068a5d1118c1d7519562c3d536" ], "sizeBytes": 1054461367 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:3cdd43583a81384c3ceb0be0b419efb31d0f3285a422f35bf3e8b33612e54368" ], "sizeBytes": 1054427580 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:0904670f3b5ef1b7b1900af238dccf247ee5569c39be773462669621936c22ea", "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi:latest" ], "sizeBytes": 1022591864 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:842e615ea62d89270b67561776361dc3bd990865b60455c16537d1109e415d07" ], "sizeBytes": 1022522195 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/taggingclient@sha256:50e1bb8b875ca6d5b990fd804be5efd0284796bdae03f45706a4b636ac3b9857" ], "sizeBytes": 1022461350 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/taggingclient@sha256:e42def2ee32ba821c4c415fc87d821cd18033a3f86f5851bce25033df10aae0b", "docker-registry.default.svc:5000/automation-ziesel/taggingclient:latest" ], "sizeBytes": 971288001 }, { "names": [ 
"docker-registry.default.svc:5000/automation-ziesel/autopython35_taggingclient@sha256:82b01c49a390eb664a4abc15b8e7251c06d651b62f7ced8201d9f39d0767a946" ], "sizeBytes": 971133434 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/autopython35_taggingclient@sha256:82ed341599bd28e0fbf726c28f8f9dfe8ade9615abe049660446a3b363e4163f" ], "sizeBytes": 971069328 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/dnsclient@sha256:2e4b645d91ab79eff558a15e61ef2668cd3e7bfe78b35c2fed83822e2a271c7a" ], "sizeBytes": 928280153 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:680a2da0a95f171077b8e96b7ab4e2d57072dfb55adc920dad7cc784f8409e06" ], "sizeBytes": 928278127 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/automationapi@sha256:b84b7a76cc52cc66487632438406b38ff1cd35fd75fa4f7a138ff139d8d35755" ], "sizeBytes": 881183040 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/aciapi@sha256:128588d042f417f37abf945d2ad32be0798d7c7d75305230d3260e26ba4480c8" ], "sizeBytes": 877838023 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/vcenterfileclient@sha256:ced225377585c26605de49305f288e0cf1e4619549949bb7b37264e4470d3a67", "docker-registry.default.svc:5000/automation-gleim/vcenterfileclient:latest" ], "sizeBytes": 877801505 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/dnsclient@sha256:7096b85c0ff4e248413d94e266bae9e1752a4d8b6eb00b0294997774b9119b8f", "docker-registry.default.svc:5000/automation-prodtest/dnsclient:latest" ], "sizeBytes": 877799517 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/dnsclient@sha256:3cfa01f1c868a1664587180e64ac66dd2bfbb20eeade1912b3c52faa91e0f24d", "docker-registry.default.svc:5000/automation-gleim/dnsclient:latest" ], "sizeBytes": 877798923 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/ftpclient@sha256:1c37b83ddcc56e2308d28a7c66043032afd158be16f19c18002a355283cb9615" ], "sizeBytes": 877796666 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/ftpclient@sha256:f5aec16035defd3b78dae96b2a243333d05de3462385d05a9bb55364b6ee1d6f", "docker-registry.default.svc:5000/automation-ziesel/ftpclient:latest" ], "sizeBytes": 877796412 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd" ], "sizeBytes": 877716514 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:8f47ed6f2a6d94d81bc39b368de3ec7d2aed48daf23e1c10ec085cbecfd29dc0" ], "sizeBytes": 877716262 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78", "docker-registry.default.svc:5000/automation-prodtest/autopython35:latest" ], "sizeBytes": 877705841 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d" ], "sizeBytes": 877705474 } ], "nodeInfo": { "architecture": "amd64", "bootID": "abe8bba6-5851-4e84-be3d-9858381f30a0", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A94E6-C957-CF85-F333-61A93D27FF6D" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers 
META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:52:24 +0100 (0:00:11.368) 0:12:58.887 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node10.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:25 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "23233", "LimitSIGPENDING": "63382", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "6243014581031", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "6243014581094", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:53:25 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:25 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:53:25 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "shutdown.target multi-user.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "6243014137759", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:53:25 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", 
"LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "6243014136289", "MainPID": "23233", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:25 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "89874432", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:53:25 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:25 CET", "StandardInput": "null", "AssertTimestampMonotonic": "6243014136289", "DefaultDependencies": "yes", "Requires": "-.mount basic.target var.mount", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "6243014137709", "AllowIsolate": "no", "Wants": "dnsmasq.service system.slice docker.service", "After": "ntpd.service var.mount dnsmasq.service basic.target docker.service system.slice chronyd.service -.mount systemd-journald.socket", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "6243014127523", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "6243014113686", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node10.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:25 CET", "ActiveEnterTimestampMonotonic": "6243014581094", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:25 CET", 
"ActiveExitTimestampMonotonic": "6243014113686", "ActiveState": "active", "After": "ntpd.service var.mount dnsmasq.service basic.target docker.service system.slice chronyd.service -.mount systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:53:25 CET", "AssertTimestampMonotonic": "6243014136289", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:53:25 CET", "ConditionTimestampMonotonic": "6243014136289", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "23233", "ExecMainStartTimestamp": "Wed 2019-01-09 14:53:25 CET", "ExecMainStartTimestampMonotonic": "6243014137709", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:25 CET", "InactiveEnterTimestampMonotonic": "6243014127523", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:25 CET", "InactiveExitTimestampMonotonic": "6243014137759", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "23233", "MemoryAccounting": "yes", "MemoryCurrent": "89874432", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount basic.target var.mount", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", 
"RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "dnsmasq.service system.slice docker.service", "WatchdogTimestamp": "Wed 2019-01-09 14:53:25 CET", "WatchdogTimestampMonotonic": "6243014581031", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:52:25 +0100 (0:00:01.292) 0:13:00.179 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node10.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node10.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.171"}, {"type": "Hostname", "address": "sp-os-node10.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "44c5e924-28ab-49a9-9b3e-315d4c607e62", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A2897-ACF0-7164-7E1B-287C3D5CBEB8", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 
10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-10-29T07:43:26Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-10-29T07:43:26Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:39:45Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:25Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node10.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node10.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node10.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871633", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node10.os.ad.scanplus.de", "uid": "cf620e53-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node10.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node10.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node10.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node10.os.ad.scanplus.de", "resourceVersion": "93871633", "selfLink": "/api/v1/nodes/sp-os-node10.os.ad.scanplus.de", "uid": "cf620e53-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node10.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.171", "type": "InternalIP" }, { "address": "sp-os-node10.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, 
"capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2018-10-29T07:43:26Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2018-10-29T07:43:26Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2019-01-09T10:39:45Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2019-01-09T14:52:25Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "44c5e924-28ab-49a9-9b3e-315d4c607e62", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A2897-ACF0-7164-7E1B-287C3D5CBEB8" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node10.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node10.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.171"}, {"type": "Hostname", "address": "sp-os-node10.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "44c5e924-28ab-49a9-9b3e-315d4c607e62", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A2897-ACF0-7164-7E1B-287C3D5CBEB8", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": 
"16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-10-29T07:43:26Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-10-29T07:43:26Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:39:45Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:25Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:25Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node10.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node10.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node10.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871633", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node10.os.ad.scanplus.de", "uid": "cf620e53-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node10.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node10.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node10.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node10.os.ad.scanplus.de", "resourceVersion": "93871633", "selfLink": "/api/v1/nodes/sp-os-node10.os.ad.scanplus.de", "uid": "cf620e53-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node10.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.171", "type": "InternalIP" }, { "address": "sp-os-node10.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", 
"hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2018-10-29T07:43:26Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2018-10-29T07:43:26Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2019-01-09T10:39:45Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2019-01-09T14:52:25Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:25Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "44c5e924-28ab-49a9-9b3e-315d4c607e62", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A2897-ACF0-7164-7E1B-287C3D5CBEB8" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node10.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node10.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.171"}, {"type": "Hostname", "address": "sp-os-node10.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "44c5e924-28ab-49a9-9b3e-315d4c607e62", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A2897-ACF0-7164-7E1B-287C3D5CBEB8", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": 
"0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1863487627, "names": ["docker-registry.default.svc:5000/automation-prod/networkapi@sha256:939250e885495554aa60d095d25b69ddf94327cb99ed0664e9226f8eab700ac8"]}, {"sizeBytes": 1272710652, "names": ["docker-registry.default.svc:5000/aidablu-prod/aidabluworkflows@sha256:371a3882b41c78020bafb81fd8181f7081356fe89c75aa77879f9a21f644616a"]}, {"sizeBytes": 1237078155, "names": ["docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows@sha256:4f00ff28a542d825ebea14278fbee18453b0aa40c390e4182f26269818c3ca29"]}, {"sizeBytes": 1201295971, "names": ["docker-registry.default.svc:5000/automation-rapp/networkapi@sha256:3f6e6973075e50cee5a1f77a2a2538b4e9115dbb12acea3c21b1c5026acb4b3d"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1169385027, "names": ["docker-registry.default.svc:5000/automation-prod/networkapi@sha256:8a6b7b7ce89c5442370f95b5c4d511d632bb28d3fde98713c3e50b6d5f928143", "docker-registry.default.svc:5000/automation-prod/networkapi:latest"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1168831034, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:3591fcdb645cec07f197df6fb74aab7eb867a6882f8318c9fd2de3fed057b9ba"]}, {"sizeBytes": 1168830307, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:42dcde1f18de3f984a0503e293978514acecad998a9c934004c32e2576f7996a"]}, {"sizeBytes": 1168826497, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:74a40bcbfb6deff62d8f4c477746835ad72ebd2e45e0e763cd0502bf899d1cbc"]}, {"sizeBytes": 1056150022, "names": ["docker-registry.default.svc:5000/automation-prod/sshclient@sha256:65407d6eba4538bc657c7fddbb4146acf9d2991535792bd45629791274a712d2", "docker-registry.default.svc:5000/automation-prod/sshclient:latest"]}, {"sizeBytes": 1056004369, "names": ["docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:6adb614ea21893bcdde493c0a400030b3d266a1ea5d39d33b2c1dbe69d33b35e"]}, {"sizeBytes": 1054461367, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:5c3b3d4922d61fa038bb30a79bb8661c77430d068a5d1118c1d7519562c3d536"]}, {"sizeBytes": 972013603, "names": ["docker-registry.default.svc:5000/automation-ziesel/taggingclient@sha256:469817bbe0e7d690e3b562babeceadb9ff2a06493868889ff330db25b62565ea"]}, {"sizeBytes": 970911308, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:0f07cf8023171ff527cb020be31e52bf8909d1d065b76ca015ee4ddb96132818"]}, {"sizeBytes": 881917664, "names": ["docker-registry.default.svc:5000/automation-gleim/automationapi@sha256:f1ff3b6322973219088d99bc6d12ac84ecadd4bca8374320ca9b34da1636c093", "docker-registry.default.svc:5000/automation-gleim/automationapi:latest"]}, {"sizeBytes": 881917496, "names": ["docker-registry.default.svc:5000/automation-prod/automationapi@sha256:d09850bfecd0a7ad1a8426698136395428a02e63747662c774d1728f5e4bac67"]}, {"sizeBytes": 877798923, "names": 
["docker-registry.default.svc:5000/automation-gleim/dnsclient@sha256:3cfa01f1c868a1664587180e64ac66dd2bfbb20eeade1912b3c52faa91e0f24d"]}, {"sizeBytes": 877705474, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d", "docker-registry.default.svc:5000/automation-ziesel/autopython35:latest"]}, {"sizeBytes": 877705134, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051"]}, {"sizeBytes": 877705083, "names": ["docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443", "docker-registry.default.svc:5000/automation-prod/autopython35:latest"]}, {"sizeBytes": 877088668, "names": ["docker-registry.default.svc:5000/automation-puscasu/aciapi@sha256:845d468e9f3525ad2d3d6d9028e1d9bf2e28ee16fd8389ba6acd73d2c04ef4f8"]}, {"sizeBytes": 877019958, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:9f8d828e1038702124d067e0f34d1770686bfd33733d40129d407b01a2a3d501"]}, {"sizeBytes": 876955852, "names": ["docker-registry.default.svc:5000/automation-puscasu/autopython35@sha256:e580d09eaed607592156983c462eec726f019f8fb1be1c573771163a39000039", "docker-registry.default.svc:5000/automation-puscasu/autopython35:latest"]}, {"sizeBytes": 876940898, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/aciapi@sha256:1e472cd03654c9bad4d07d797a6a1ba413f82e230e84f50fced6e6a6370a0120"]}, {"sizeBytes": 873684372, "names": ["docker-registry.default.svc:5000/aida-portal-prod/aida-portal@sha256:1e78a4da50ff0c9e38132b1bc663d70cfac88931d034d9ccc59590c315b404a0"]}, {"sizeBytes": 817543822, "names": ["docker-registry.default.svc:5000/automation-paul/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802", "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802"]}, {"sizeBytes": 814403829, "names": ["docker-registry.default.svc:5000/automation-basisprod/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d"]}, {"sizeBytes": 813911535, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe"]}, {"sizeBytes": 801131361, "names": ["docker-registry.default.svc:5000/automation-qa/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7", "docker-registry.default.svc:5000/automation-rick/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7"]}, {"sizeBytes": 781097250, "names": ["docker-registry.default.svc:5000/automation-qa-managed-connectivity/sshclient@sha256:a12c5621256b06219c02ebfd4dc3908ade88a9638263dcedfea1d569173a85bb"]}, {"sizeBytes": 739944830, "names": 
["docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:3f053570ba381a03b9cd5495911d57222404a4a26bd281aaea969390086355fa"]}, {"sizeBytes": 739443496, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:3f6578aa5338e03926a6dd7245a2762c0a2f08b2013859110325e8e7a7ea9d73"]}, {"sizeBytes": 712599004, "names": ["docker-registry.default.svc:5000/automation-qa-managed-connectivity/taggingclient@sha256:bcfc78d3a5376b336b338852b52446fab941efec1b91af22a24d1019ef36bbe4"]}, {"sizeBytes": 712582577, "names": ["docker-registry.default.svc:5000/automation-qa-service-definitions/taggingclient@sha256:f1948b936c9faf37c257c9bbf93c50acfad542a84dc502492bde06a2e76c4ac4"]}, {"sizeBytes": 708225961, "names": ["docker-registry.default.svc:5000/cisco-call-actions/actions-prod-crm@sha256:239421d5850b00bf759ebf6bc858b5cc7141c7b62eb6b4e3a13d076fce0e677e"]}, {"sizeBytes": 699654983, "names": ["docker-registry.default.svc:5000/automation-maier-blu/aida-blu@sha256:ae9123f8d3716e1b40fff561674243fb8ed35373bc9e5b0894dfd422e1126b8f"]}, {"sizeBytes": 666631364, "names": ["docker-registry.default.svc:5000/automation-qa/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e", "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e"]}, {"sizeBytes": 656529916, "names": ["docker-registry.default.svc:5000/automation-prod/automationapi@sha256:79b96b8bb851671aa24ff6b8a5a654d389506b7e590f9a70795ab415755449b0", "docker-registry.default.svc:5000/automation-prod/automationapi:latest"]}, {"sizeBytes": 652501928, "names": ["docker-registry.default.svc:5000/automation-prod/aciapi@sha256:f654069ed96147bfd1e9ff88951c81434bb7f0df0fa1715edc45828d8970baa2"]}, {"sizeBytes": 652459114, "names": ["docker-registry.default.svc:5000/automation-prod/vcenterfileclient@sha256:aaadff6d011ec838b3d9f8323fdd7d43725175202fba5600a4eea6f661a3767d"]}, {"sizeBytes": 644972379, "names": ["docker-registry.default.svc:5000/automation-basisprod/autopython35_taggingclient@sha256:13327370530304fb4c510c811be51b23da7148e755249d20e148aea48b787d33", "docker-registry.default.svc:5000/automation-develop/autopython35_taggingclient@sha256:13327370530304fb4c510c811be51b23da7148e755249d20e148aea48b787d33", "docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:13327370530304fb4c510c811be51b23da7148e755249d20e148aea48b787d33"]}, {"sizeBytes": 644972333, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd", "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd"]}, {"sizeBytes": 644972333, "names": ["docker-registry.default.svc:5000/automation-basisprod/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc", "docker-registry.default.svc:5000/automation-develop/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc", "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc"]}, {"sizeBytes": 644972333, "names": ["docker-registry.default.svc:5000/automation-paul/autopython35@sha256:16f1cda5b5cfaf01c6456d496df4e293fe2f1b0fef97f3af72c5873f306a3d0b", 
"docker-registry.default.svc:5000/automation-puscasu/autopython35@sha256:16f1cda5b5cfaf01c6456d496df4e293fe2f1b0fef97f3af72c5873f306a3d0b"]}, {"sizeBytes": 630953502, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:419ce44d92ab952988f39486af98a77a3651f9c97ca28a3a17d4a7f5e8c1a035"]}, {"sizeBytes": 630586455, "names": ["docker-registry.default.svc:5000/automation-prod/autopython35@sha256:21970d839221843df161275940e46894ac966ca5c8ba2bb87c4149e94f2a0d1e"]}, {"sizeBytes": 630586009, "names": ["docker.io/centos/python-35-centos7@sha256:6b2678c38563e13066437dc1441bd5ba656dddf0c82a96dba5a7bdf3637bb328", "docker.io/centos/python-35-centos7:latest"]}, {"sizeBytes": 627139161, "names": ["docker-registry.default.svc:5000/openshift/python@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421", "registry.access.redhat.com/rhscl/python-35-rhel7@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421"]}, {"sizeBytes": 553856140, "names": ["docker-registry.default.svc:5000/cisco-call-actions/notesphonenumbers@sha256:3872f9cd310fb6e37a4d4b54800146564a94bffb7a9a3151b2448c2792d9c459"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-10-29T07:43:26Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:35Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-10-29T07:43:26Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:35Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T10:39:45Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:35Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:52:35Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:52:35Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:43:35Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:35Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node10.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node10.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node10.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871705", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node10.os.ad.scanplus.de", "uid": "cf620e53-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') ok: [sp-os-node10.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node10.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc 
get node sp-os-node10.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node10.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node10.os.ad.scanplus.de", "resourceVersion": "93871705", "selfLink": "/api/v1/nodes/sp-os-node10.os.ad.scanplus.de", "uid": "cf620e53-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node10.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.171", "type": "InternalIP" }, { "address": "sp-os-node10.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:35Z", "lastTransitionTime": "2018-10-29T07:43:26Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:35Z", "lastTransitionTime": "2018-10-29T07:43:26Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:35Z", "lastTransitionTime": "2019-01-09T10:39:45Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:35Z", "lastTransitionTime": "2019-01-09T14:52:35Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:35Z", "lastTransitionTime": "2018-09-13T22:43:35Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-prod/networkapi@sha256:939250e885495554aa60d095d25b69ddf94327cb99ed0664e9226f8eab700ac8" ], "sizeBytes": 1863487627 }, { "names": [ "docker-registry.default.svc:5000/aidablu-prod/aidabluworkflows@sha256:371a3882b41c78020bafb81fd8181f7081356fe89c75aa77879f9a21f644616a" ], "sizeBytes": 1272710652 }, { "names": [ "docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows@sha256:4f00ff28a542d825ebea14278fbee18453b0aa40c390e4182f26269818c3ca29" ], "sizeBytes": 1237078155 }, { "names": [ "docker-registry.default.svc:5000/automation-rapp/networkapi@sha256:3f6e6973075e50cee5a1f77a2a2538b4e9115dbb12acea3c21b1c5026acb4b3d" ], "sizeBytes": 1201295971 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/networkapi@sha256:8a6b7b7ce89c5442370f95b5c4d511d632bb28d3fde98713c3e50b6d5f928143", 
"docker-registry.default.svc:5000/automation-prod/networkapi:latest" ], "sizeBytes": 1169385027 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:3591fcdb645cec07f197df6fb74aab7eb867a6882f8318c9fd2de3fed057b9ba" ], "sizeBytes": 1168831034 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:42dcde1f18de3f984a0503e293978514acecad998a9c934004c32e2576f7996a" ], "sizeBytes": 1168830307 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:74a40bcbfb6deff62d8f4c477746835ad72ebd2e45e0e763cd0502bf899d1cbc" ], "sizeBytes": 1168826497 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/sshclient@sha256:65407d6eba4538bc657c7fddbb4146acf9d2991535792bd45629791274a712d2", "docker-registry.default.svc:5000/automation-prod/sshclient:latest" ], "sizeBytes": 1056150022 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:6adb614ea21893bcdde493c0a400030b3d266a1ea5d39d33b2c1dbe69d33b35e" ], "sizeBytes": 1056004369 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:5c3b3d4922d61fa038bb30a79bb8661c77430d068a5d1118c1d7519562c3d536" ], "sizeBytes": 1054461367 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/taggingclient@sha256:469817bbe0e7d690e3b562babeceadb9ff2a06493868889ff330db25b62565ea" ], "sizeBytes": 972013603 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:0f07cf8023171ff527cb020be31e52bf8909d1d065b76ca015ee4ddb96132818" ], "sizeBytes": 970911308 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/automationapi@sha256:f1ff3b6322973219088d99bc6d12ac84ecadd4bca8374320ca9b34da1636c093", "docker-registry.default.svc:5000/automation-gleim/automationapi:latest" ], "sizeBytes": 881917664 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/automationapi@sha256:d09850bfecd0a7ad1a8426698136395428a02e63747662c774d1728f5e4bac67" ], "sizeBytes": 881917496 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/dnsclient@sha256:3cfa01f1c868a1664587180e64ac66dd2bfbb20eeade1912b3c52faa91e0f24d" ], "sizeBytes": 877798923 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d", "docker-registry.default.svc:5000/automation-ziesel/autopython35:latest" ], "sizeBytes": 877705474 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051" ], "sizeBytes": 877705134 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443", "docker-registry.default.svc:5000/automation-prod/autopython35:latest" ], "sizeBytes": 877705083 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/aciapi@sha256:845d468e9f3525ad2d3d6d9028e1d9bf2e28ee16fd8389ba6acd73d2c04ef4f8" ], "sizeBytes": 877088668 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:9f8d828e1038702124d067e0f34d1770686bfd33733d40129d407b01a2a3d501" ], "sizeBytes": 877019958 }, { "names": [ 
"docker-registry.default.svc:5000/automation-puscasu/autopython35@sha256:e580d09eaed607592156983c462eec726f019f8fb1be1c573771163a39000039", "docker-registry.default.svc:5000/automation-puscasu/autopython35:latest" ], "sizeBytes": 876955852 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/aciapi@sha256:1e472cd03654c9bad4d07d797a6a1ba413f82e230e84f50fced6e6a6370a0120" ], "sizeBytes": 876940898 }, { "names": [ "docker-registry.default.svc:5000/aida-portal-prod/aida-portal@sha256:1e78a4da50ff0c9e38132b1bc663d70cfac88931d034d9ccc59590c315b404a0" ], "sizeBytes": 873684372 }, { "names": [ "docker-registry.default.svc:5000/automation-paul/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802", "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:5f657c52f7a0e9d460b4623734de381e29ad3a1e116e0ef4e44bbdbc9ffe2802" ], "sizeBytes": 817543822 }, { "names": [ "docker-registry.default.svc:5000/automation-basisprod/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d" ], "sizeBytes": 814403829 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe" ], "sizeBytes": 813911535 }, { "names": [ "docker-registry.default.svc:5000/automation-qa/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7", "docker-registry.default.svc:5000/automation-rick/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7" ], "sizeBytes": 801131361 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-managed-connectivity/sshclient@sha256:a12c5621256b06219c02ebfd4dc3908ade88a9638263dcedfea1d569173a85bb" ], "sizeBytes": 781097250 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:3f053570ba381a03b9cd5495911d57222404a4a26bd281aaea969390086355fa" ], "sizeBytes": 739944830 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:3f6578aa5338e03926a6dd7245a2762c0a2f08b2013859110325e8e7a7ea9d73" ], "sizeBytes": 739443496 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-managed-connectivity/taggingclient@sha256:bcfc78d3a5376b336b338852b52446fab941efec1b91af22a24d1019ef36bbe4" ], "sizeBytes": 712599004 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-service-definitions/taggingclient@sha256:f1948b936c9faf37c257c9bbf93c50acfad542a84dc502492bde06a2e76c4ac4" ], "sizeBytes": 712582577 }, { "names": [ "docker-registry.default.svc:5000/cisco-call-actions/actions-prod-crm@sha256:239421d5850b00bf759ebf6bc858b5cc7141c7b62eb6b4e3a13d076fce0e677e" ], "sizeBytes": 708225961 }, { "names": [ "docker-registry.default.svc:5000/automation-maier-blu/aida-blu@sha256:ae9123f8d3716e1b40fff561674243fb8ed35373bc9e5b0894dfd422e1126b8f" ], "sizeBytes": 699654983 }, { "names": [ 
"docker-registry.default.svc:5000/automation-qa/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e", "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:013090eb2bac433e8e46108ed2645a32b8ed3a5518e47e8cd4f85731c30d680e" ], "sizeBytes": 666631364 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/automationapi@sha256:79b96b8bb851671aa24ff6b8a5a654d389506b7e590f9a70795ab415755449b0", "docker-registry.default.svc:5000/automation-prod/automationapi:latest" ], "sizeBytes": 656529916 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/aciapi@sha256:f654069ed96147bfd1e9ff88951c81434bb7f0df0fa1715edc45828d8970baa2" ], "sizeBytes": 652501928 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/vcenterfileclient@sha256:aaadff6d011ec838b3d9f8323fdd7d43725175202fba5600a4eea6f661a3767d" ], "sizeBytes": 652459114 }, { "names": [ "docker-registry.default.svc:5000/automation-basisprod/autopython35_taggingclient@sha256:13327370530304fb4c510c811be51b23da7148e755249d20e148aea48b787d33", "docker-registry.default.svc:5000/automation-develop/autopython35_taggingclient@sha256:13327370530304fb4c510c811be51b23da7148e755249d20e148aea48b787d33", "docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:13327370530304fb4c510c811be51b23da7148e755249d20e148aea48b787d33" ], "sizeBytes": 644972379 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd", "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:f17bc436e7176fabda06ce6188d7428d3b99120351c34a218645a6f10a5096cd" ], "sizeBytes": 644972333 }, { "names": [ "docker-registry.default.svc:5000/automation-basisprod/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc", "docker-registry.default.svc:5000/automation-develop/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc", "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:a28aaaf3fb0217fb8696919e2e92ec0aae0e5041b1f0e22cf0bf9b8470af7acc" ], "sizeBytes": 644972333 }, { "names": [ "docker-registry.default.svc:5000/automation-paul/autopython35@sha256:16f1cda5b5cfaf01c6456d496df4e293fe2f1b0fef97f3af72c5873f306a3d0b", "docker-registry.default.svc:5000/automation-puscasu/autopython35@sha256:16f1cda5b5cfaf01c6456d496df4e293fe2f1b0fef97f3af72c5873f306a3d0b" ], "sizeBytes": 644972333 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:419ce44d92ab952988f39486af98a77a3651f9c97ca28a3a17d4a7f5e8c1a035" ], "sizeBytes": 630953502 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:21970d839221843df161275940e46894ac966ca5c8ba2bb87c4149e94f2a0d1e" ], "sizeBytes": 630586455 }, { "names": [ "docker.io/centos/python-35-centos7@sha256:6b2678c38563e13066437dc1441bd5ba656dddf0c82a96dba5a7bdf3637bb328", "docker.io/centos/python-35-centos7:latest" ], "sizeBytes": 630586009 }, { "names": [ "docker-registry.default.svc:5000/openshift/python@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421", "registry.access.redhat.com/rhscl/python-35-rhel7@sha256:1bc3d136fcfcdf0745c8ef25b4a8519e6a690a80129f3f26738b4978d0f1b421" ], "sizeBytes": 627139161 }, { "names": [ "docker-registry.default.svc:5000/cisco-call-actions/notesphonenumbers@sha256:3872f9cd310fb6e37a4d4b54800146564a94bffb7a9a3151b2448c2792d9c459" ], "sizeBytes": 553856140 } ], 
"nodeInfo": { "architecture": "amd64", "bootID": "44c5e924-28ab-49a9-9b3e-315d4c607e62", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A2897-ACF0-7164-7E1B-287C3D5CBEB8" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:52:37 +0100 (0:00:11.406) 0:13:11.586 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node11.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:38 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "107042", "LimitSIGPENDING": "63382", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "6848541004080", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "6848541004169", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:53:38 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:38 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:53:38 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "multi-user.target shutdown.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "6848540439815", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", 
"DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:53:38 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "6848540437984", "MainPID": "107042", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:38 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "90865664", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:53:38 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:38 CET", "StandardInput": "null", "AssertTimestampMonotonic": "6848540437984", "DefaultDependencies": "yes", "Requires": "-.mount var.mount basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "6848540439737", "AllowIsolate": "no", "Wants": "system.slice docker.service dnsmasq.service", "After": "chronyd.service var.mount docker.service -.mount dnsmasq.service basic.target ntpd.service systemd-journald.socket system.slice", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "6848540428584", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "6848540415736", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: 
[sp-os-node11.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:38 CET", "ActiveEnterTimestampMonotonic": "6848541004169", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:38 CET", "ActiveExitTimestampMonotonic": "6848540415736", "ActiveState": "active", "After": "chronyd.service var.mount docker.service -.mount dnsmasq.service basic.target ntpd.service systemd-journald.socket system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:53:38 CET", "AssertTimestampMonotonic": "6848540437984", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:53:38 CET", "ConditionTimestampMonotonic": "6848540437984", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "107042", "ExecMainStartTimestamp": "Wed 2019-01-09 14:53:38 CET", "ExecMainStartTimestampMonotonic": "6848540439737", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:38 CET", "InactiveEnterTimestampMonotonic": "6848540428584", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:38 CET", "InactiveExitTimestampMonotonic": "6848540439815", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "107042", "MemoryAccounting": "yes", "MemoryCurrent": "90865664", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", 
"NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount var.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker.service dnsmasq.service", "WatchdogTimestamp": "Wed 2019-01-09 14:53:38 CET", "WatchdogTimestampMonotonic": "6848541004080", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:52:38 +0100 (0:00:01.395) 0:13:12.982 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node11.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node11.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.172"}, {"type": "Hostname", "address": "sp-os-node11.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", 
"kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "5fed3e64-1f88-43ce-a7d4-a11486d2aa95", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A99F6-3B4A-CA3B-3F8A-CBA56A5501E9", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-10-22T07:31:33Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-10-22T07:31:33Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-07T10:51:06Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:38Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:14:06Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node11.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node11.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node11.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871723", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node11.os.ad.scanplus.de", "uid": "cf73336b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node11.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node11.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node11.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-FFM-KL75" }, "name": 
"sp-os-node11.os.ad.scanplus.de", "resourceVersion": "93871723", "selfLink": "/api/v1/nodes/sp-os-node11.os.ad.scanplus.de", "uid": "cf73336b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node11.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.172", "type": "InternalIP" }, { "address": "sp-os-node11.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2018-10-22T07:31:33Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2018-10-22T07:31:33Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2019-01-07T10:51:06Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2019-01-09T14:52:38Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2018-09-13T22:14:06Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "5fed3e64-1f88-43ce-a7d4-a11486d2aa95", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A99F6-3B4A-CA3B-3F8A-CBA56A5501E9" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node11.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node11.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.172"}, {"type": "Hostname", "address": "sp-os-node11.os.ad.scanplus.de"}], 
"nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "5fed3e64-1f88-43ce-a7d4-a11486d2aa95", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A99F6-3B4A-CA3B-3F8A-CBA56A5501E9", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2018-10-22T07:31:33Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-10-22T07:31:33Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-07T10:51:06Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:38Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:14:06Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:38Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node11.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node11.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node11.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871723", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node11.os.ad.scanplus.de", "uid": "cf73336b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node11.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node11.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node11.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": 
"primary", "update.group": "odd", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node11.os.ad.scanplus.de", "resourceVersion": "93871723", "selfLink": "/api/v1/nodes/sp-os-node11.os.ad.scanplus.de", "uid": "cf73336b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node11.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.172", "type": "InternalIP" }, { "address": "sp-os-node11.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2018-10-22T07:31:33Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2018-10-22T07:31:33Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2019-01-07T10:51:06Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2019-01-09T14:52:38Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:38Z", "lastTransitionTime": "2018-09-13T22:14:06Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "5fed3e64-1f88-43ce-a7d4-a11486d2aa95", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A99F6-3B4A-CA3B-3F8A-CBA56A5501E9" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node11.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node11.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249844Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.172"}, {"type": 
"Hostname", "address": "sp-os-node11.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "5fed3e64-1f88-43ce-a7d4-a11486d2aa95", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A99F6-3B4A-CA3B-3F8A-CBA56A5501E9", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147444Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1906741485, "names": ["docker-registry.default.svc:5000/automation-prodtest/networkapi@sha256:2b0d317ab7494efa8caadefb9775f8c7c4bcb5b0cd6cb2158d8f077607d2357f", "docker-registry.default.svc:5000/automation-prodtest/networkapi:latest"]}, {"sizeBytes": 1889747575, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi@sha256:afbdea5ea505f5a1604e9f08a4805bc749c3b3aa6ecc990517c8fce0bbd03423"]}, {"sizeBytes": 1855625982, "names": ["docker-registry.default.svc:5000/automation-haertenstein/networkapi@sha256:289eec2f67b1d806992a8c36c8588c3a798f7dc61c2f234709f63a364741b0ff"]}, {"sizeBytes": 1838643203, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:04c1278affc74a0295704f623aac53547f0d64c7103fb0cef4a13d3387847a8b", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi:latest"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1247416602, "names": ["docker-registry.default.svc:5000/aidablu-test/aidabluworkflows@sha256:b9de3318443ff53ac835fd0dbe48f940359e135f62b4c183051d0cab23472cd5", "docker-registry.default.svc:5000/aidablu-test/aidabluworkflows:latest"]}, {"sizeBytes": 1237248586, "names": ["docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows@sha256:0629c7cf74518c0513b0ff3f8ec4d56a2dc76738b4e181bc38ac56be9934cb51"]}, {"sizeBytes": 1237236873, "names": ["docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows@sha256:5fb02e32b518aad0117e4917cfdaeb2f9aee8e4826226b72b14c79379b503d1a", "docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1237095818, "names": ["docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows@sha256:0711f196c373b1d1b2685156da70d96b69eaef432736c80c03243947a0046a8b"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1196480016, "names": ["registry.spdev.net/aidablu/mistral@sha256:0e82049a566de0fa322e7298a12dca8e2afc60f42c57ce1749c6d0b1418046c4", "registry.spdev.net/aidablu/mistral:latest"]}, {"sizeBytes": 1169075464, "names": ["docker-registry.default.svc:5000/automation-qa-service-definitions/networkapi@sha256:f88a5813485589d8eb61e57e5f3d1bc1c7c85d0fd9f2d7dcc7d31ae5a48f0551"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1168835291, "names": 
["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:a7e965500178ce84e7dfb1abaa24f9ce55d46159e15a89c04fd771a6031c3cb5"]}, {"sizeBytes": 1168826513, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:2c9a3d44189e80b02a2e858d0f9fb2406814a8d3e5e48c4453ce5d8c6609158f", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest"]}, {"sizeBytes": 1168816047, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:e56ec0b6654f69776baf2d924dc1310096598e0bbf513738fb2fb58381637765"]}, {"sizeBytes": 1094862903, "names": ["docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:c038796c2a0180f6aeba3b416d6add1a9f6e2af7187d488158212b1fee7bbca4"]}, {"sizeBytes": 1066868142, "names": ["docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:91f0ccb952379d92868914c2d19e43da36591f989f03aaec040240d3cafc3bd0", "docker-registry.default.svc:5000/automation-puscasu/networkapi:latest"]}, {"sizeBytes": 1056150971, "names": ["docker-registry.default.svc:5000/automation-maier/sshclient@sha256:7e3abc9071ebfd92449dbecf5601b7f702947338a4b24981530c480795247247", "docker-registry.default.svc:5000/automation-maier/sshclient:latest"]}, {"sizeBytes": 1056150454, "names": ["docker-registry.default.svc:5000/automation-gleim/sshclient@sha256:d8d5e0db081db3c9ca0307e23947c32fd9cf72090463871bfdd0e721dc3b5e58", "docker-registry.default.svc:5000/automation-gleim/sshclient:latest"]}, {"sizeBytes": 1056150151, "names": ["docker-registry.default.svc:5000/automation-gleim/sshclient@sha256:aed4298913a4a27a028afeb9a86babacfcdb819e159b43f893c85162cb6b4bbe"]}, {"sizeBytes": 1056015106, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35_sshclient@sha256:b49bbbb1d64e82a26e4475ba144e9ae326374cd46edbeffdcc38de50584f105c"]}, {"sizeBytes": 1056014512, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35_sshclient@sha256:cbe41f0927fc5263444b47834b4986e6aec3d976e7028eef82a6686a9fea76ef"]}, {"sizeBytes": 1055128691, "names": ["docker-registry.default.svc:5000/automation-rick/sshclient@sha256:91c2efff275c6b86bcfa49f7ae3d1f6978fbc13de248ac619b93ed93eece210f", "docker-registry.default.svc:5000/automation-rick/sshclient:latest"]}, {"sizeBytes": 1055019576, "names": ["docker-registry.default.svc:5000/automation-ziesel/sshclient@sha256:396ef79043fbc23e3b870da2ef6ec0f5cecdb14bf4034814e5c57cc7718d4158"]}, {"sizeBytes": 1054996981, "names": ["docker-registry.default.svc:5000/automation-rick/autopython35_sshclient@sha256:9d5c9e34ff08d503eaaf811f2b48bda9770313310d0df40ea30266e51ad85c45"]}, {"sizeBytes": 1054530019, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/sshclient@sha256:8854395b8f8795accdb824ce609a939adcbcda3d172f687d9163e323d888c3b3"]}, {"sizeBytes": 1054461367, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:5c3b3d4922d61fa038bb30a79bb8661c77430d068a5d1118c1d7519562c3d536"]}, {"sizeBytes": 1054358725, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_sshclient@sha256:e127c65c55899181004a817e8f66951a69a6f25bf39cf59642885ce34b0cb3af"]}, {"sizeBytes": 1022591864, "names": ["docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:0904670f3b5ef1b7b1900af238dccf247ee5569c39be773462669621936c22ea"]}, {"sizeBytes": 972013263, "names": 
["docker-registry.default.svc:5000/automation-gleim/taggingclient@sha256:8ede292a0150c58e41f06e06bb155c88ab249e78836e543887363c6a85adb735", "docker-registry.default.svc:5000/automation-gleim/taggingclient:latest"]}, {"sizeBytes": 971853266, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient@sha256:888f22f3898bb50fe3ddb7eaeb753c373c747ea313c9a1ab83d00a340b23359d", "docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient:latest"]}, {"sizeBytes": 971853266, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient@sha256:d8e4b7138ba1bb296506cdb3ba76ee157644ad82b33c66446a6ea1bd1b88259e"]}, {"sizeBytes": 971076536, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/taggingclient@sha256:65be00ee0889d43ba2b6ccebf124a66cc2ea2ff09850ba34bb023f02651c95fe"]}, {"sizeBytes": 970910131, "names": ["docker-registry.default.svc:5000/automation-puscasu/autopython35_taggingclient@sha256:30f066fa22e85934c0b2d9734b9c3aa76e38c110765def6d120d99180c9c6677"]}, {"sizeBytes": 928193384, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:00ae1aa8ccf72e0f0e163fc92088c71d7f9c6e20114b2dc8ab75f94b3f6dec37"]}, {"sizeBytes": 881918004, "names": ["docker-registry.default.svc:5000/automation-ziesel/automationapi@sha256:a854b77add974ea5e9bb5096280784f9a403142078cf7c265d4553eaf5e3d893", "docker-registry.default.svc:5000/automation-ziesel/automationapi:latest"]}, {"sizeBytes": 881917496, "names": ["docker-registry.default.svc:5000/automation-prod/automationapi@sha256:d09850bfecd0a7ad1a8426698136395428a02e63747662c774d1728f5e4bac67", "docker-registry.default.svc:5000/automation-prod/automationapi:latest"]}, {"sizeBytes": 881252117, "names": ["docker-registry.default.svc:5000/automation-rick/automationapi@sha256:5d1317b0f16b468ba953e8a4e17d6c30c16f2eb3a41392706ad2878b9301b236"]}, {"sizeBytes": 881118934, "names": ["docker-registry.default.svc:5000/automation-puscasu/automationapi@sha256:322a3ee6a3a8873e10641c4e5fd2f1e8df0fd913a7f649a9c5cd19baa8cb8bb2", "docker-registry.default.svc:5000/automation-puscasu/automationapi:latest"]}, {"sizeBytes": 877803702, "names": ["docker-registry.default.svc:5000/automation-maier/dnsclient@sha256:18e12a5486ac4e521ccb83bdb4176ba7acd310caefc1d356f37b46ffee900e04"]}, {"sizeBytes": 877803527, "names": ["docker-registry.default.svc:5000/automation-gleim/dnsclient@sha256:3c24211dcd203fe842c644dec2dfb72aef6a507c840fd4d06d5c01529c7db0f3"]}, {"sizeBytes": 877794715, "names": ["docker-registry.default.svc:5000/automation-haertenstein/dnsclient@sha256:d532fb313412db2ee38f4349342743037dcecb1f9a88e95cbe69475592009400", "docker-registry.default.svc:5000/automation-haertenstein/dnsclient:latest"]}, {"sizeBytes": 877716514, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd"]}, {"sizeBytes": 877707459, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:3380ef10cb9e277b8a66f83e7562b5c8ff018fb6e0767a86959caf19bb3b961a"]}, {"sizeBytes": 877705474, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d"]}, {"sizeBytes": 877705134, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051"]}, {"sizeBytes": 877705083, "names": 
["docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443"]}, {"sizeBytes": 877176291, "names": ["docker-registry.default.svc:5000/automation-rick/dnsclient@sha256:47743e2253279980acd6ce769070d9031f669f53b4f3ec60c6817eeacd3ba60b"]}, {"sizeBytes": 877174775, "names": ["docker-registry.default.svc:5000/automation-rick/ftpclient@sha256:dbe9ac3ab7949cd3570349fbe64f78b8d01a9f2e7f18922914e57947605dec7b"]}], "conditions": [{"status": "False", "lastTransitionTime": "2018-10-22T07:31:33Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:48Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2018-10-22T07:31:33Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:48Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-07T10:51:06Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:48Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:52:48Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:52:48Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:14:06Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:48Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node11.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node11.os.ad.scanplus.de", "labels": {"update.group": "odd", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node11.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871778", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node11.os.ad.scanplus.de", "uid": "cf73336b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') ok: [sp-os-node11.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node11.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node11.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node11.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "odd", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node11.os.ad.scanplus.de", 
"resourceVersion": "93871778", "selfLink": "/api/v1/nodes/sp-os-node11.os.ad.scanplus.de", "uid": "cf73336b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node11.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.172", "type": "InternalIP" }, { "address": "sp-os-node11.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147444Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249844Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:48Z", "lastTransitionTime": "2018-10-22T07:31:33Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:48Z", "lastTransitionTime": "2018-10-22T07:31:33Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:48Z", "lastTransitionTime": "2019-01-07T10:51:06Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:48Z", "lastTransitionTime": "2019-01-09T14:52:48Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:48Z", "lastTransitionTime": "2018-09-13T22:14:06Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-prodtest/networkapi@sha256:2b0d317ab7494efa8caadefb9775f8c7c4bcb5b0cd6cb2158d8f077607d2357f", "docker-registry.default.svc:5000/automation-prodtest/networkapi:latest" ], "sizeBytes": 1906741485 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35_networkapi@sha256:afbdea5ea505f5a1604e9f08a4805bc749c3b3aa6ecc990517c8fce0bbd03423" ], "sizeBytes": 1889747575 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/networkapi@sha256:289eec2f67b1d806992a8c36c8588c3a798f7dc61c2f234709f63a364741b0ff" ], "sizeBytes": 1855625982 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:04c1278affc74a0295704f623aac53547f0d64c7103fb0cef4a13d3387847a8b", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi:latest" ], "sizeBytes": 1838643203 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/aidablu-test/aidabluworkflows@sha256:b9de3318443ff53ac835fd0dbe48f940359e135f62b4c183051d0cab23472cd5", "docker-registry.default.svc:5000/aidablu-test/aidabluworkflows:latest" ], "sizeBytes": 1247416602 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows@sha256:0629c7cf74518c0513b0ff3f8ec4d56a2dc76738b4e181bc38ac56be9934cb51" ], "sizeBytes": 1237248586 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows@sha256:5fb02e32b518aad0117e4917cfdaeb2f9aee8e4826226b72b14c79379b503d1a", 
"docker-registry.default.svc:5000/automation-prodtest-blu/aidabluworkflows:latest" ], "sizeBytes": 1237236873 }, { "names": [ "docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aidabluworkflows@sha256:0711f196c373b1d1b2685156da70d96b69eaef432736c80c03243947a0046a8b" ], "sizeBytes": 1237095818 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:0e82049a566de0fa322e7298a12dca8e2afc60f42c57ce1749c6d0b1418046c4", "registry.spdev.net/aidablu/mistral:latest" ], "sizeBytes": 1196480016 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-service-definitions/networkapi@sha256:f88a5813485589d8eb61e57e5f3d1bc1c7c85d0fd9f2d7dcc7d31ae5a48f0551" ], "sizeBytes": 1169075464 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:a7e965500178ce84e7dfb1abaa24f9ce55d46159e15a89c04fd771a6031c3cb5" ], "sizeBytes": 1168835291 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:2c9a3d44189e80b02a2e858d0f9fb2406814a8d3e5e48c4453ce5d8c6609158f", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest" ], "sizeBytes": 1168826513 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:e56ec0b6654f69776baf2d924dc1310096598e0bbf513738fb2fb58381637765" ], "sizeBytes": 1168816047 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:c038796c2a0180f6aeba3b416d6add1a9f6e2af7187d488158212b1fee7bbca4" ], "sizeBytes": 1094862903 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:91f0ccb952379d92868914c2d19e43da36591f989f03aaec040240d3cafc3bd0", "docker-registry.default.svc:5000/automation-puscasu/networkapi:latest" ], "sizeBytes": 1066868142 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/sshclient@sha256:7e3abc9071ebfd92449dbecf5601b7f702947338a4b24981530c480795247247", "docker-registry.default.svc:5000/automation-maier/sshclient:latest" ], "sizeBytes": 1056150971 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/sshclient@sha256:d8d5e0db081db3c9ca0307e23947c32fd9cf72090463871bfdd0e721dc3b5e58", "docker-registry.default.svc:5000/automation-gleim/sshclient:latest" ], "sizeBytes": 1056150454 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/sshclient@sha256:aed4298913a4a27a028afeb9a86babacfcdb819e159b43f893c85162cb6b4bbe" ], "sizeBytes": 1056150151 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35_sshclient@sha256:b49bbbb1d64e82a26e4475ba144e9ae326374cd46edbeffdcc38de50584f105c" ], "sizeBytes": 1056015106 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35_sshclient@sha256:cbe41f0927fc5263444b47834b4986e6aec3d976e7028eef82a6686a9fea76ef" ], "sizeBytes": 1056014512 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/sshclient@sha256:91c2efff275c6b86bcfa49f7ae3d1f6978fbc13de248ac619b93ed93eece210f", "docker-registry.default.svc:5000/automation-rick/sshclient:latest" ], "sizeBytes": 1055128691 }, { "names": [ 
"docker-registry.default.svc:5000/automation-ziesel/sshclient@sha256:396ef79043fbc23e3b870da2ef6ec0f5cecdb14bf4034814e5c57cc7718d4158" ], "sizeBytes": 1055019576 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/autopython35_sshclient@sha256:9d5c9e34ff08d503eaaf811f2b48bda9770313310d0df40ea30266e51ad85c45" ], "sizeBytes": 1054996981 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/sshclient@sha256:8854395b8f8795accdb824ce609a939adcbcda3d172f687d9163e323d888c3b3" ], "sizeBytes": 1054530019 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_sshclient@sha256:5c3b3d4922d61fa038bb30a79bb8661c77430d068a5d1118c1d7519562c3d536" ], "sizeBytes": 1054461367 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_sshclient@sha256:e127c65c55899181004a817e8f66951a69a6f25bf39cf59642885ce34b0cb3af" ], "sizeBytes": 1054358725 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/autopython35_networkapi@sha256:0904670f3b5ef1b7b1900af238dccf247ee5569c39be773462669621936c22ea" ], "sizeBytes": 1022591864 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/taggingclient@sha256:8ede292a0150c58e41f06e06bb155c88ab249e78836e543887363c6a85adb735", "docker-registry.default.svc:5000/automation-gleim/taggingclient:latest" ], "sizeBytes": 972013263 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient@sha256:888f22f3898bb50fe3ddb7eaeb753c373c747ea313c9a1ab83d00a340b23359d", "docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient:latest" ], "sizeBytes": 971853266 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35_taggingclient@sha256:d8e4b7138ba1bb296506cdb3ba76ee157644ad82b33c66446a6ea1bd1b88259e" ], "sizeBytes": 971853266 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/taggingclient@sha256:65be00ee0889d43ba2b6ccebf124a66cc2ea2ff09850ba34bb023f02651c95fe" ], "sizeBytes": 971076536 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/autopython35_taggingclient@sha256:30f066fa22e85934c0b2d9734b9c3aa76e38c110765def6d120d99180c9c6677" ], "sizeBytes": 970910131 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:00ae1aa8ccf72e0f0e163fc92088c71d7f9c6e20114b2dc8ab75f94b3f6dec37" ], "sizeBytes": 928193384 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/automationapi@sha256:a854b77add974ea5e9bb5096280784f9a403142078cf7c265d4553eaf5e3d893", "docker-registry.default.svc:5000/automation-ziesel/automationapi:latest" ], "sizeBytes": 881918004 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/automationapi@sha256:d09850bfecd0a7ad1a8426698136395428a02e63747662c774d1728f5e4bac67", "docker-registry.default.svc:5000/automation-prod/automationapi:latest" ], "sizeBytes": 881917496 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/automationapi@sha256:5d1317b0f16b468ba953e8a4e17d6c30c16f2eb3a41392706ad2878b9301b236" ], "sizeBytes": 881252117 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/automationapi@sha256:322a3ee6a3a8873e10641c4e5fd2f1e8df0fd913a7f649a9c5cd19baa8cb8bb2", "docker-registry.default.svc:5000/automation-puscasu/automationapi:latest" ], "sizeBytes": 881118934 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/dnsclient@sha256:18e12a5486ac4e521ccb83bdb4176ba7acd310caefc1d356f37b46ffee900e04" ], "sizeBytes": 877803702 }, { "names": [ 
"docker-registry.default.svc:5000/automation-gleim/dnsclient@sha256:3c24211dcd203fe842c644dec2dfb72aef6a507c840fd4d06d5c01529c7db0f3" ], "sizeBytes": 877803527 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/dnsclient@sha256:d532fb313412db2ee38f4349342743037dcecb1f9a88e95cbe69475592009400", "docker-registry.default.svc:5000/automation-haertenstein/dnsclient:latest" ], "sizeBytes": 877794715 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd" ], "sizeBytes": 877716514 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:3380ef10cb9e277b8a66f83e7562b5c8ff018fb6e0767a86959caf19bb3b961a" ], "sizeBytes": 877707459 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35@sha256:05eeb14f717c95193452f75a622fe6fc2c72a3bd3a507ef0a4dbbce7b2aaf39d" ], "sizeBytes": 877705474 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051" ], "sizeBytes": 877705134 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443" ], "sizeBytes": 877705083 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/dnsclient@sha256:47743e2253279980acd6ce769070d9031f669f53b4f3ec60c6817eeacd3ba60b" ], "sizeBytes": 877176291 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/ftpclient@sha256:dbe9ac3ab7949cd3570349fbe64f78b8d01a9f2e7f18922914e57947605dec7b" ], "sizeBytes": 877174775 } ], "nodeInfo": { "architecture": "amd64", "bootID": "5fed3e64-1f88-43ce-a7d4-a11486d2aa95", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A99F6-3B4A-CA3B-3F8A-CBA56A5501E9" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Restart nodes] ******************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [restart node] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:32 Wednesday 09 January 2019 15:52:50 +0100 (0:00:11.451) 0:13:24.434 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-node12.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ 
path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/atomic-openshift-node.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:51 CET", "ExecMainCode": "0", "UnitFileState": "enabled", "ExecMainPID": "103476", "LimitSIGPENDING": "63382", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "10162198908599", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "10162198908720", "StandardError": "inherit", "AssertTimestamp": "Wed 2019-01-09 14:53:51 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:51 CET", "WatchdogTimestamp": "Wed 2019-01-09 14:53:51 CET", "NoNewPrivileges": "no", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "Before": "shutdown.target multi-user.target", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "10162198404481", "SendSIGHUP": "no", "TimeoutStartUSec": "5min", "Type": "notify", "SyslogPriority": "30", "SameProcessGroup": "no", "MountFlags": "0", "LimitNPROC": "63382", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 14:53:51 CET", "SyslogIdentifier": "atomic-openshift-node", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "-999", "Documentation": "https://github.com/openshift/origin", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "SecureBits": "0", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "running", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "10162198402379", "MainPID": "103476", "StartupBlockIOWeight": "18446744073709551615", "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:51 CET", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "105558016", "LimitRTTIME": "18446744073709551615", "WantedBy": "multi-user.target", "TasksCurrent": "18446744073709551615", "RestartUSec": "5s", "ConditionTimestamp": "Wed 2019-01-09 14:53:51 CET", "CPUAccounting": "yes", "RemainAfterExit": "no", "RequiresMountsFor": "/var/lib/origin", "PrivateNetwork": "no", "Restart": "always", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:51 CET", "StandardInput": "null", "AssertTimestampMonotonic": "10162198402379", "DefaultDependencies": "yes", "Requires": "-.mount 
var.mount basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "main", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "10162198404418", "AllowIsolate": "no", "Wants": "docker.service dnsmasq.service system.slice", "After": "ntpd.service systemd-journald.socket chronyd.service -.mount dnsmasq.service system.slice docker.service var.mount basic.target", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "journal", "WorkingDirectory": "/var/lib/origin", "InactiveEnterTimestampMonotonic": "10162198391241", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "OpenShift Node", "ActiveExitTimestampMonotonic": "10162198367669", "CanReload": "no", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "atomic-openshift-node.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "atomic-openshift-node.service"}, "invocation": {"module_args": {"daemon-reload": true, "force": null, "name": "atomic-openshift-node", "enabled": null, "daemon_reload": true, "state": "restarted", "no_block": false, "user": false, "masked": null}}, "state": "started", "changed": true, "name": "atomic-openshift-node"}\n', '') changed: [sp-os-node12.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "daemon-reload": true, "daemon_reload": true, "enabled": null, "force": null, "masked": null, "name": "atomic-openshift-node", "no_block": false, "state": "restarted", "user": false } }, "name": "atomic-openshift-node", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 14:53:51 CET", "ActiveEnterTimestampMonotonic": "10162198908720", "ActiveExitTimestamp": "Wed 2019-01-09 14:53:51 CET", "ActiveExitTimestampMonotonic": "10162198367669", "ActiveState": "active", "After": "ntpd.service systemd-journald.socket chronyd.service -.mount dnsmasq.service system.slice docker.service var.mount basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 14:53:51 CET", "AssertTimestampMonotonic": "10162198402379", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 14:53:51 CET", "ConditionTimestampMonotonic": "10162198402379", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/atomic-openshift-node.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "OpenShift Node", "DevicePolicy": "auto", "Documentation": "https://github.com/openshift/origin", "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "103476", 
"ExecMainStartTimestamp": "Wed 2019-01-09 14:53:51 CET", "ExecMainStartTimestampMonotonic": "10162198404418", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "atomic-openshift-node.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2019-01-09 14:53:51 CET", "InactiveEnterTimestampMonotonic": "10162198391241", "InactiveExitTimestamp": "Wed 2019-01-09 14:53:51 CET", "InactiveExitTimestampMonotonic": "10162198404481", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63382", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63382", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "103476", "MemoryAccounting": "yes", "MemoryCurrent": "105558016", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "atomic-openshift-node.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "-999", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "-.mount var.mount basic.target", "RequiresMountsFor": "/var/lib/origin", "Restart": "always", "RestartUSec": "5s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogIdentifier": "atomic-openshift-node", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker.service dnsmasq.service system.slice", "WatchdogTimestamp": "Wed 2019-01-09 14:53:51 CET", "WatchdogTimestampMonotonic": "10162198908599", "WatchdogUSec": "0", "WorkingDirectory": "/var/lib/origin" } } TASK [Wait for node to be ready] 
TASK [Wait for node to be ready] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:38 Wednesday 09 January 2019 15:52:51 +0100 (0:00:01.224) 0:13:25.658 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node12.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node12.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249836Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.173"}, {"type": "Hostname", "address": "sp-os-node12.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "bcc41e59-2267-4fba-8999-9d983e503886", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A8C4A-345D-F026-5AA4-FAA908BB81B5", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147436Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2019-01-09T07:15:08Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2019-01-09T07:15:08Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T09:59:08Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:51Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:46:57Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": "sp-os-node12.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name":
"sp-os-node12.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node12.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871802", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node12.os.ad.scanplus.de", "uid": "cf65ab9b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (36 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node12.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node12.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node12.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node12.os.ad.scanplus.de", "resourceVersion": "93871802", "selfLink": "/api/v1/nodes/sp-os-node12.os.ad.scanplus.de", "uid": "cf65ab9b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node12.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.173", "type": "InternalIP" }, { "address": "sp-os-node12.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147436Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249836Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T07:15:08Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T07:15:08Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T09:59:08Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T14:52:51Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2018-09-13T22:46:57Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], 
"daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "bcc41e59-2267-4fba-8999-9d983e503886", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A8C4A-345D-F026-5AA4-FAA908BB81B5" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node12.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node12.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249836Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.173"}, {"type": "Hostname", "address": "sp-os-node12.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "bcc41e59-2267-4fba-8999-9d983e503886", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A8C4A-345D-F026-5AA4-FAA908BB81B5", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147436Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "conditions": [{"status": "False", "lastTransitionTime": "2019-01-09T07:15:08Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2019-01-09T07:15:08Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T09:59:08Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T14:52:51Z", "reason": "KubeletNotReady", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "container runtime is down", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:46:57Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:52:51Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": {"externalID": 
"sp-os-node12.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node12.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node12.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871802", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node12.os.ad.scanplus.de", "uid": "cf65ab9b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for node to be ready (35 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node12.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node12.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node12.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node12.os.ad.scanplus.de", "resourceVersion": "93871802", "selfLink": "/api/v1/nodes/sp-os-node12.os.ad.scanplus.de", "uid": "cf65ab9b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node12.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.173", "type": "InternalIP" }, { "address": "sp-os-node12.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147436Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249836Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T07:15:08Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T07:15:08Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T09:59:08Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2019-01-09T14:52:51Z", "message": "container runtime is down", "reason": "KubeletNotReady", "status": "False", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:52:51Z", "lastTransitionTime": "2018-09-13T22:46:57Z", "message": "kubelet has sufficient PID available", "reason": 
"KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "nodeInfo": { "architecture": "amd64", "bootID": "bcc41e59-2267-4fba-8999-9d983e503886", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A8C4A-345D-F026-5AA4-FAA908BB81B5" } } } ], "returncode": 0 }, "retries": 37, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "node", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "default", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sp-os-node12.os.ad.scanplus.de"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get node sp-os-node12.os.ad.scanplus.de -o json -n default", "results": [{"status": {"capacity": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16249836Ki"}, "addresses": [{"type": "InternalIP", "address": "172.29.80.173"}, {"type": "Hostname", "address": "sp-os-node12.os.ad.scanplus.de"}], "nodeInfo": {"kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeletVersion": "v1.10.0+b81c8f8", "containerRuntimeVersion": "docker://1.13.1", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "kubeProxyVersion": "v1.10.0+b81c8f8", "bootID": "bcc41e59-2267-4fba-8999-9d983e503886", "osImage": "Unknown", "architecture": "amd64", "systemUUID": "420A8C4A-345D-F026-5AA4-FAA908BB81B5", "operatingSystem": "linux"}, "allocatable": {"hugepages-1Gi": "0", "hugepages-2Mi": "0", "pods": "250", "cpu": "8", "memory": "16147436Ki"}, "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"sizeBytes": 1863488428, "names": ["docker-registry.default.svc:5000/automation-prodtest/networkapi@sha256:09dd40d596d5c86462efabafb37487369280fccebd6ce221ec00c710e2543554"]}, {"sizeBytes": 1863487627, "names": ["docker-registry.default.svc:5000/automation-prod/networkapi@sha256:939250e885495554aa60d095d25b69ddf94327cb99ed0664e9226f8eab700ac8", "docker-registry.default.svc:5000/automation-prod/networkapi:latest"]}, {"sizeBytes": 1862980941, "names": ["docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:a82e28b008783a8e8ff469335f24e5f93b1a680c764f7f162fce1c4bee48e5b9"]}, {"sizeBytes": 1838632466, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35_networkapi@sha256:28afbc327a131547185e73fc2e818c6829315a098b51a92d23e7aa61c5161307", "docker-registry.default.svc:5000/automation-gleim/autopython35_networkapi:latest"]}, {"sizeBytes": 1838027836, "names": ["docker-registry.default.svc:5000/automation-ziesel/networkapi@sha256:02d6cc2f3fda077caf0b8f7a9a6ed5b3b6d741aed903d362b9f4196ef1b4fefd"]}, {"sizeBytes": 1318074697, 
"names": ["docker-registry.default.svc:5000/automation-qa-service-definitions-blu/aidabluworkflows@sha256:f2823d66f8bfbcdc20f86d46f6d10339ae23b1c3f0b5f7d84da85dbe82997c7f"]}, {"sizeBytes": 1268901980, "names": ["registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10"]}, {"sizeBytes": 1238893950, "names": ["docker-registry.default.svc:5000/automation-maier-blu/aidabluworkflows@sha256:2a71d103bcdc159f16a0166f1d81f45507fd1716f20872bc0657b4af53e5377b", "docker-registry.default.svc:5000/automation-maier-blu/aidabluworkflows:latest"]}, {"sizeBytes": 1237078155, "names": ["docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows@sha256:4f00ff28a542d825ebea14278fbee18453b0aa40c390e4182f26269818c3ca29"]}, {"sizeBytes": 1196488450, "names": ["registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0"]}, {"sizeBytes": 1169123707, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ae941eda7d7033e89cdea7f94610eb6148207d8a6168208bb6ae253ca6659d89"]}, {"sizeBytes": 1168898500, "names": ["registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11"]}, {"sizeBytes": 1168831034, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:cd02dd0022aa91941dba88f0ccc5ddccf54c99756235f478495039b77c23d8cc", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest"]}, {"sizeBytes": 1168831034, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:3591fcdb645cec07f197df6fb74aab7eb867a6882f8318c9fd2de3fed057b9ba"]}, {"sizeBytes": 1168830388, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:f11827b5616deb91adc568a71de9778da82f2d7090d3676c76b39a25742a22dc"]}, {"sizeBytes": 1168826521, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:6d8eac63c830cf6ec56ecebacc8196f102e7d3a604643a87d43d63977066ca8f"]}, {"sizeBytes": 1168826497, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:1a006d5012e560f81332f7a4ac71f1027dba197605d58c7f841b5f1697bd3ceb"]}, {"sizeBytes": 1168826465, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:f10ad0d1a51380991b58641dfc4b475d6d67a78a7f8cd9e345087c501eec6e6d"]}, {"sizeBytes": 1168826440, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:6e744b9b4440a86709f941944d0e4e3c8fd70e5a39bbd51c9994f772b166d894"]}, {"sizeBytes": 1168826440, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:04a5d627f9dfb6260470f43f77f66ea2b663b04a87c56fb7f673e5d88eba2823"]}, {"sizeBytes": 1066868142, "names": ["docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:727857b846d474da2991a7652ca90d5ae50bffe416e36c136701dcba677ac09b"]}, {"sizeBytes": 971357492, "names": ["docker-registry.default.svc:5000/automation-rick/taggingclient@sha256:e9e0b0a260d497e27ee6050dfb8c7d98425aa55f193887e4c70f33943b2937dc"]}, {"sizeBytes": 970911308, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:0f07cf8023171ff527cb020be31e52bf8909d1d065b76ca015ee4ddb96132818"]}, {"sizeBytes": 932351584, "names": 
["docker-registry.default.svc:5000/automation-prodtest/automationapi@sha256:bc512b12e2a9f1b42d1acf93538ef549ee5929378cfad37842d97f30e7abff14"]}, {"sizeBytes": 881879542, "names": ["docker-registry.default.svc:5000/automation-maier/automationapi@sha256:ba24f420f043ec543b90798b0bdbdc7525e273bea6e17d0503be66c402a38f85"]}, {"sizeBytes": 881870541, "names": ["docker-registry.default.svc:5000/automation-haertenstein/automationapi@sha256:49885c95984f8a03f0cb024ae7f15818cce7a7167f1c43039bbf65cf1af0c01d", "docker-registry.default.svc:5000/automation-haertenstein/automationapi:latest"]}, {"sizeBytes": 877838781, "names": ["docker-registry.default.svc:5000/automation-prodtest/aciapi@sha256:5592eb4f798092f2b8d90b4c2090a06f7c3118867bb3772e3cf14dc87311daef", "docker-registry.default.svc:5000/automation-prodtest/aciapi:latest"]}, {"sizeBytes": 877838023, "names": ["docker-registry.default.svc:5000/automation-prod/aciapi@sha256:128588d042f417f37abf945d2ad32be0798d7c7d75305230d3260e26ba4480c8", "docker-registry.default.svc:5000/automation-prod/aciapi:latest"]}, {"sizeBytes": 877801680, "names": ["docker-registry.default.svc:5000/automation-maier/vcenterfileclient@sha256:6cf4b3beae5167d32e2dd1cc0d0efe63eef179422fb7bd756f56be93cfb21cd1"]}, {"sizeBytes": 877795112, "names": ["docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:f436f8c61843da62e6a657e6136343d0dec4d14cfd6c900745b0438d73bf58b3", "docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient:latest"]}, {"sizeBytes": 877716514, "names": ["docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd"]}, {"sizeBytes": 877707459, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:3380ef10cb9e277b8a66f83e7562b5c8ff018fb6e0767a86959caf19bb3b961a"]}, {"sizeBytes": 877705841, "names": ["docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78"]}, {"sizeBytes": 877705134, "names": ["docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051"]}, {"sizeBytes": 877705083, "names": ["docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443"]}, {"sizeBytes": 877221851, "names": ["docker-registry.default.svc:5000/automation-rick/aciapi@sha256:09d033d671fc4609effc05f928a8d474a838f7a35c1f51a2c924e1945101d310"]}, {"sizeBytes": 877089035, "names": ["docker-registry.default.svc:5000/automation-rick/autopython35@sha256:1cb098cb5dabb124b9c6790178a4d78916c55a5fba3fef20d0b0c13f05fecdb3"]}, {"sizeBytes": 877041104, "names": ["docker-registry.default.svc:5000/automation-puscasu/ftpclient@sha256:74e7aab1964000a7df3260ac85e1a00a673a1745236913d29cecb67dfb35b2e3"]}, {"sizeBytes": 876740493, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:4d6eadbef81eb8ba3c3da438edae5a6a00d0ce61c1667c6e76e5c007eec29b68"]}, {"sizeBytes": 873684372, "names": ["docker-registry.default.svc:5000/aida-portal-prod/aida-portal@sha256:1e78a4da50ff0c9e38132b1bc663d70cfac88931d034d9ccc59590c315b404a0"]}, {"sizeBytes": 868745300, "names": ["docker-registry.default.svc:5000/automation-qa-service-definitions-aida/aida-portal@sha256:8bc738087467e4d240790f0b5389ab3080a11105c5c6eed2cbe6529d58d967f2"]}, {"sizeBytes": 822822137, "names": 
["docker-registry.default.svc:5000/automation-aida-qa-managed-connectivity/aida-portal@sha256:2cbe8dd17edf38907ba8ceadeb9324e87df58e94c2ee61d3cd90d76103a44ff7"]}, {"sizeBytes": 814403829, "names": ["docker-registry.default.svc:5000/automation-basisprod/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d"]}, {"sizeBytes": 813911535, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe"]}, {"sizeBytes": 810375991, "names": ["docker-registry.default.svc:5000/automation-haertenstein/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af", "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af"]}, {"sizeBytes": 801131361, "names": ["docker-registry.default.svc:5000/automation-qa/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7", "docker-registry.default.svc:5000/automation-rick/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7"]}, {"sizeBytes": 747199209, "names": ["docker-registry.default.svc:5000/automation-ziesel/autopython35_networkapi@sha256:46ee743457afe8a66fccf16dc7d117d11e6dd00a72e5176fb876f0ff70ff2999"]}, {"sizeBytes": 739443496, "names": ["docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:3f6578aa5338e03926a6dd7245a2762c0a2f08b2013859110325e8e7a7ea9d73"]}, {"sizeBytes": 700055717, "names": ["docker-registry.default.svc:5000/aidablu-qa/aida-blu@sha256:8c89650555725e2b776882c2c659292eb853fd1aca68ecb81175629a74d965b7"]}, {"sizeBytes": 699988722, "names": ["docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aida-blu@sha256:abb455bc5bd20aedc03574eb481d4270936c26d2e80d345694fceff6117ed144"]}], "conditions": [{"status": "False", "lastTransitionTime": "2019-01-09T07:15:08Z", "reason": "KubeletHasSufficientDisk", "lastHeartbeatTime": "2019-01-09T14:53:01Z", "message": "kubelet has sufficient disk space available", "type": "OutOfDisk"}, {"status": "False", "lastTransitionTime": "2019-01-09T07:15:08Z", "reason": "KubeletHasSufficientMemory", "lastHeartbeatTime": "2019-01-09T14:53:01Z", "message": "kubelet has sufficient memory available", "type": "MemoryPressure"}, {"status": "False", "lastTransitionTime": "2019-01-09T09:59:08Z", "reason": "KubeletHasNoDiskPressure", "lastHeartbeatTime": "2019-01-09T14:53:01Z", "message": "kubelet has no disk pressure", "type": "DiskPressure"}, {"status": "True", "lastTransitionTime": "2019-01-09T14:53:01Z", "reason": "KubeletReady", "lastHeartbeatTime": "2019-01-09T14:53:01Z", "message": "kubelet is posting ready status", "type": "Ready"}, {"status": "False", "lastTransitionTime": "2018-09-13T22:46:57Z", "reason": "KubeletHasSufficientPID", "lastHeartbeatTime": "2019-01-09T14:53:01Z", "message": "kubelet has sufficient PID available", "type": "PIDPressure"}]}, "kind": "Node", "spec": 
{"externalID": "sp-os-node12.os.ad.scanplus.de"}, "apiVersion": "v1", "metadata": {"name": "sp-os-node12.os.ad.scanplus.de", "labels": {"update.group": "even", "logging-infra-fluentd": "true", "zone": "RZ-FFM-KL75", "beta.kubernetes.io/os": "linux", "region": "primary", "kubernetes.io/hostname": "sp-os-node12.os.ad.scanplus.de", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "beta.kubernetes.io/arch": "amd64"}, "resourceVersion": "93871856", "creationTimestamp": "2018-07-18T14:21:06Z", "annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true", "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96"}, "selfLink": "/api/v1/nodes/sp-os-node12.os.ad.scanplus.de", "uid": "cf65ab9b-8a95-11e8-a1e7-005056aa3492"}}]}}\n', '') ok: [sp-os-node12.os.ad.scanplus.de -> sp-os-master01.os.ad.scanplus.de] => { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sp-os-node12.os.ad.scanplus.de", "namespace": "default", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get node sp-os-node12.os.ad.scanplus.de -o json -n default", "results": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "node.openshift.io/md5sum": "a19a7ff4c63df7f2f1af6c75774dfe96", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2018-07-18T14:21:06Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "sp-os-node12.os.ad.scanplus.de", "logging-infra-fluentd": "true", "node-role.kubernetes.io/compute": "true", "nodeusage": "prod", "region": "primary", "update.group": "even", "zone": "RZ-FFM-KL75" }, "name": "sp-os-node12.os.ad.scanplus.de", "resourceVersion": "93871856", "selfLink": "/api/v1/nodes/sp-os-node12.os.ad.scanplus.de", "uid": "cf65ab9b-8a95-11e8-a1e7-005056aa3492" }, "spec": { "externalID": "sp-os-node12.os.ad.scanplus.de" }, "status": { "addresses": [ { "address": "172.29.80.173", "type": "InternalIP" }, { "address": "sp-os-node12.os.ad.scanplus.de", "type": "Hostname" } ], "allocatable": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16147436Ki", "pods": "250" }, "capacity": { "cpu": "8", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "16249836Ki", "pods": "250" }, "conditions": [ { "lastHeartbeatTime": "2019-01-09T14:53:01Z", "lastTransitionTime": "2019-01-09T07:15:08Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2019-01-09T14:53:01Z", "lastTransitionTime": "2019-01-09T07:15:08Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2019-01-09T14:53:01Z", "lastTransitionTime": "2019-01-09T09:59:08Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2019-01-09T14:53:01Z", "lastTransitionTime": "2019-01-09T14:53:01Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" }, { "lastHeartbeatTime": "2019-01-09T14:53:01Z", "lastTransitionTime": "2018-09-13T22:46:57Z", "message": "kubelet has sufficient PID available", 
"reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "docker-registry.default.svc:5000/automation-prodtest/networkapi@sha256:09dd40d596d5c86462efabafb37487369280fccebd6ce221ec00c710e2543554" ], "sizeBytes": 1863488428 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/networkapi@sha256:939250e885495554aa60d095d25b69ddf94327cb99ed0664e9226f8eab700ac8", "docker-registry.default.svc:5000/automation-prod/networkapi:latest" ], "sizeBytes": 1863487627 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:a82e28b008783a8e8ff469335f24e5f93b1a680c764f7f162fce1c4bee48e5b9" ], "sizeBytes": 1862980941 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35_networkapi@sha256:28afbc327a131547185e73fc2e818c6829315a098b51a92d23e7aa61c5161307", "docker-registry.default.svc:5000/automation-gleim/autopython35_networkapi:latest" ], "sizeBytes": 1838632466 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/networkapi@sha256:02d6cc2f3fda077caf0b8f7a9a6ed5b3b6d741aed903d362b9f4196ef1b4fefd" ], "sizeBytes": 1838027836 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-service-definitions-blu/aidabluworkflows@sha256:f2823d66f8bfbcdc20f86d46f6d10339ae23b1c3f0b5f7d84da85dbe82997c7f" ], "sizeBytes": 1318074697 }, { "names": [ "registry.access.redhat.com/openshift3/ose-node@sha256:b105ec6800823dc4b1deef0b0fe0abb90afc8e530191606baba014e3d50f1daf", "registry.access.redhat.com/openshift3/ose-node:v3.10" ], "sizeBytes": 1268901980 }, { "names": [ "docker-registry.default.svc:5000/automation-maier-blu/aidabluworkflows@sha256:2a71d103bcdc159f16a0166f1d81f45507fd1716f20872bc0657b4af53e5377b", "docker-registry.default.svc:5000/automation-maier-blu/aidabluworkflows:latest" ], "sizeBytes": 1238893950 }, { "names": [ "docker-registry.default.svc:5000/aidablu-qa/aidabluworkflows@sha256:4f00ff28a542d825ebea14278fbee18453b0aa40c390e4182f26269818c3ca29" ], "sizeBytes": 1237078155 }, { "names": [ "registry.spdev.net/aidablu/mistral@sha256:25befa8a8065a9fcec17ede0be6f3c12b6de079fa36db3427b1e7d024b85921b", "registry.spdev.net/aidablu/mistral:7.0.0" ], "sizeBytes": 1196488450 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:ae941eda7d7033e89cdea7f94610eb6148207d8a6168208bb6ae253ca6659d89" ], "sizeBytes": 1169123707 }, { "names": [ "registry.redhat.io/openshift3/ose-node@sha256:fe405ec65f26cf9433be532f4d843fcb3d7eb90720993f3c31a7b6bb11d138fb", "registry.redhat.io/openshift3/ose-node:v3.11" ], "sizeBytes": 1168898500 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:cd02dd0022aa91941dba88f0ccc5ddccf54c99756235f478495039b77c23d8cc", "docker-registry.default.svc:5000/automation-schoenthaler/networkapi:latest" ], "sizeBytes": 1168831034 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:3591fcdb645cec07f197df6fb74aab7eb867a6882f8318c9fd2de3fed057b9ba" ], "sizeBytes": 1168831034 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:f11827b5616deb91adc568a71de9778da82f2d7090d3676c76b39a25742a22dc" ], "sizeBytes": 1168830388 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:6d8eac63c830cf6ec56ecebacc8196f102e7d3a604643a87d43d63977066ca8f" ], "sizeBytes": 1168826521 }, { "names": [ 
"docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:1a006d5012e560f81332f7a4ac71f1027dba197605d58c7f841b5f1697bd3ceb" ], "sizeBytes": 1168826497 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:f10ad0d1a51380991b58641dfc4b475d6d67a78a7f8cd9e345087c501eec6e6d" ], "sizeBytes": 1168826465 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:6e744b9b4440a86709f941944d0e4e3c8fd70e5a39bbd51c9994f772b166d894" ], "sizeBytes": 1168826440 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/networkapi@sha256:04a5d627f9dfb6260470f43f77f66ea2b663b04a87c56fb7f673e5d88eba2823" ], "sizeBytes": 1168826440 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/networkapi@sha256:727857b846d474da2991a7652ca90d5ae50bffe416e36c136701dcba677ac09b" ], "sizeBytes": 1066868142 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/taggingclient@sha256:e9e0b0a260d497e27ee6050dfb8c7d98425aa55f193887e4c70f33943b2937dc" ], "sizeBytes": 971357492 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35_taggingclient@sha256:0f07cf8023171ff527cb020be31e52bf8909d1d065b76ca015ee4ddb96132818" ], "sizeBytes": 970911308 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/automationapi@sha256:bc512b12e2a9f1b42d1acf93538ef549ee5929378cfad37842d97f30e7abff14" ], "sizeBytes": 932351584 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/automationapi@sha256:ba24f420f043ec543b90798b0bdbdc7525e273bea6e17d0503be66c402a38f85" ], "sizeBytes": 881879542 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/automationapi@sha256:49885c95984f8a03f0cb024ae7f15818cce7a7167f1c43039bbf65cf1af0c01d", "docker-registry.default.svc:5000/automation-haertenstein/automationapi:latest" ], "sizeBytes": 881870541 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/aciapi@sha256:5592eb4f798092f2b8d90b4c2090a06f7c3118867bb3772e3cf14dc87311daef", "docker-registry.default.svc:5000/automation-prodtest/aciapi:latest" ], "sizeBytes": 877838781 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/aciapi@sha256:128588d042f417f37abf945d2ad32be0798d7c7d75305230d3260e26ba4480c8", "docker-registry.default.svc:5000/automation-prod/aciapi:latest" ], "sizeBytes": 877838023 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/vcenterfileclient@sha256:6cf4b3beae5167d32e2dd1cc0d0efe63eef179422fb7bd756f56be93cfb21cd1" ], "sizeBytes": 877801680 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient@sha256:f436f8c61843da62e6a657e6136343d0dec4d14cfd6c900745b0438d73bf58b3", "docker-registry.default.svc:5000/automation-prodtest/vcenterfileclient:latest" ], "sizeBytes": 877795112 }, { "names": [ "docker-registry.default.svc:5000/automation-maier/autopython35@sha256:0c71d5251991e6fde82e4fe9d2413ff2f25d092eb2f7aa968f9294dc593d4bcd" ], "sizeBytes": 877716514 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:3380ef10cb9e277b8a66f83e7562b5c8ff018fb6e0767a86959caf19bb3b961a" ], "sizeBytes": 877707459 }, { "names": [ "docker-registry.default.svc:5000/automation-prodtest/autopython35@sha256:acc862ede3fd33ee14d159244406554f96e1e7c5a4aade11c48b3934e49b8a78" ], "sizeBytes": 877705841 }, { "names": [ "docker-registry.default.svc:5000/automation-gleim/autopython35@sha256:e1bd9cf42349635d48a0b536f90d2159f51aece7fbc5ea68a398d49df5a14051" ], 
"sizeBytes": 877705134 }, { "names": [ "docker-registry.default.svc:5000/automation-prod/autopython35@sha256:976d3441e49ff4ec24df483813259aae8024fbba5c7c2826fe0db0a50caaa443" ], "sizeBytes": 877705083 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/aciapi@sha256:09d033d671fc4609effc05f928a8d474a838f7a35c1f51a2c924e1945101d310" ], "sizeBytes": 877221851 }, { "names": [ "docker-registry.default.svc:5000/automation-rick/autopython35@sha256:1cb098cb5dabb124b9c6790178a4d78916c55a5fba3fef20d0b0c13f05fecdb3" ], "sizeBytes": 877089035 }, { "names": [ "docker-registry.default.svc:5000/automation-puscasu/ftpclient@sha256:74e7aab1964000a7df3260ac85e1a00a673a1745236913d29cecb67dfb35b2e3" ], "sizeBytes": 877041104 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35@sha256:4d6eadbef81eb8ba3c3da438edae5a6a00d0ce61c1667c6e76e5c007eec29b68" ], "sizeBytes": 876740493 }, { "names": [ "docker-registry.default.svc:5000/aida-portal-prod/aida-portal@sha256:1e78a4da50ff0c9e38132b1bc663d70cfac88931d034d9ccc59590c315b404a0" ], "sizeBytes": 873684372 }, { "names": [ "docker-registry.default.svc:5000/automation-qa-service-definitions-aida/aida-portal@sha256:8bc738087467e4d240790f0b5389ab3080a11105c5c6eed2cbe6529d58d967f2" ], "sizeBytes": 868745300 }, { "names": [ "docker-registry.default.svc:5000/automation-aida-qa-managed-connectivity/aida-portal@sha256:2cbe8dd17edf38907ba8ceadeb9324e87df58e94c2ee61d3cd90d76103a44ff7" ], "sizeBytes": 822822137 }, { "names": [ "docker-registry.default.svc:5000/automation-basisprod/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-develop/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d", "docker-registry.default.svc:5000/automation-maier/autopython35_networkapi@sha256:a5f1a183bdb91fc5d5bd9104b2bccf9489edb0f363cf5e340405014415ec962d" ], "sizeBytes": 814403829 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe", "docker-registry.default.svc:5000/automation-prod/autopython35_networkapi@sha256:8ca2153d19ad4d753a81e23d708c07fef207c2e15cb981344658b7e0c04a4afe" ], "sizeBytes": 813911535 }, { "names": [ "docker-registry.default.svc:5000/automation-haertenstein/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af", "docker-registry.default.svc:5000/automation-prod/autopython35_sshclient@sha256:8bac746af9fd859d4ac8a47b58fcbc6a4a614e420d8b28b3d5984378cdc932af" ], "sizeBytes": 810375991 }, { "names": [ "docker-registry.default.svc:5000/automation-qa/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7", "docker-registry.default.svc:5000/automation-rick/autopython35_networkapi@sha256:937900e66678ecbdd4c2a67aebb2dbe51f8f45173f9c0bd7d048f66555cd05e7" ], "sizeBytes": 801131361 }, { "names": [ "docker-registry.default.svc:5000/automation-ziesel/autopython35_networkapi@sha256:46ee743457afe8a66fccf16dc7d117d11e6dd00a72e5176fb876f0ff70ff2999" ], "sizeBytes": 747199209 }, { "names": [ "docker-registry.default.svc:5000/automation-schoenthaler/autopython35_networkapi@sha256:3f6578aa5338e03926a6dd7245a2762c0a2f08b2013859110325e8e7a7ea9d73" ], "sizeBytes": 739443496 }, { "names": [ "docker-registry.default.svc:5000/aidablu-qa/aida-blu@sha256:8c89650555725e2b776882c2c659292eb853fd1aca68ecb81175629a74d965b7" ], 
"sizeBytes": 700055717 }, { "names": [ "docker-registry.default.svc:5000/automation-blu-qa-managed-connectivity/aida-blu@sha256:abb455bc5bd20aedc03574eb481d4270936c26d2e80d345694fceff6117ed144" ], "sizeBytes": 699988722 } ], "nodeInfo": { "architecture": "amd64", "bootID": "bcc41e59-2267-4fba-8999-9d983e503886", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.11.6.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "d768f1f16c8043df9d09ccf8ab47a75c", "operatingSystem": "linux", "osImage": "Unknown", "systemUUID": "420A8C4A-345D-F026-5AA4-FAA908BB81B5" } } } ], "returncode": 0 }, "state": "list" } META: ran handlers META: ran handlers PLAY [Populate config host groups] ****************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Load group name mapping variables] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:7 Wednesday 09 January 2019 15:53:02 +0100 (0:00:11.398) 0:13:37.057 ***** ok: [localhost] => { "ansible_facts": { "g_all_hosts": "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) | union(g_new_etcd_hosts) | union(g_lb_hosts) | union(g_nfs_hosts) | union(g_new_node_hosts)| union(g_new_master_hosts) | default([]) }}", "g_etcd_hosts": "{{ groups.etcd | default([]) }}", "g_glusterfs_hosts": "{{ groups.glusterfs | default([]) }}", "g_glusterfs_registry_hosts": "{{ groups.glusterfs_registry | default(g_glusterfs_hosts) }}", "g_lb_hosts": "{{ groups.lb | default([]) }}", "g_master_hosts": "{{ groups.masters | default([]) }}", "g_new_etcd_hosts": "{{ groups.new_etcd | default([]) }}", "g_new_master_hosts": "{{ groups.new_masters | default([]) }}", "g_new_node_hosts": "{{ groups.new_nodes | default([]) }}", "g_nfs_hosts": "{{ groups.nfs | default([]) }}", "g_node_hosts": "{{ groups.nodes | default([]) }}" }, "ansible_included_var_files": [ "/usr/share/ansible/openshift-ansible/playbooks/init/vars/cluster_hosts.yml" ], "changed": false } TASK [Evaluate groups - g_nfs_hosts is single host] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:10 Wednesday 09 January 2019 15:53:02 +0100 (0:00:00.040) 0:13:37.097 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Evaluate oo_all_hosts] ************************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:15 Wednesday 09 January 2019 15:53:02 +0100 (0:00:00.032) 
TASK [Evaluate oo_all_hosts] ************************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:15 Wednesday 09 January 2019 15:53:02 +0100 (0:00:00.032) 0:13:37.129 ***** creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-infra01.os.ad.scanplus.de ok: [localhost] => (item=sp-os-infra01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-infra01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-infra01.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-infra02.os.ad.scanplus.de creating host via 'add_host': hostname=sp-os-node02.os.ad.scanplus.de ok: [localhost] => (item=sp-os-infra02.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-infra02.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-infra02.os.ad.scanplus.de" } ok: [localhost] => (item=sp-os-node02.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node02.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node02.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node03.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node03.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node03.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node03.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node04.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node04.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node04.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node04.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node05.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node05.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node05.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node05.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node06.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node06.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node06.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node06.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node07.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node07.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node07.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node07.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node08.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node08.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node08.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node08.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node09.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node09.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node09.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node09.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node10.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node10.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ],
"host_name": "sp-os-node10.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node10.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node11.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node11.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node11.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node11.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node12.os.ad.scanplus.de ok: [localhost] => (item=sp-os-node12.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_all_hosts" ], "host_name": "sp-os-node12.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node12.os.ad.scanplus.de" } TASK [Evaluate oo_masters] ************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:24 Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.170) 0:13:37.299 ***** creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_masters" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" } TASK [Evaluate oo_first_master] ********************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:33 Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.061) 0:13:37.361 ***** creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de ok: [localhost] => { "add_host": { "groups": [ "oo_first_master" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false } TASK [Evaluate oo_new_etcd_to_config] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:42 Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.045) 0:13:37.407 ***** TASK [Evaluate oo_masters_to_config] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:51 Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.029) 0:13:37.436 ***** creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_masters_to_config" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" } TASK [Evaluate oo_etcd_to_config] 
****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:60
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.056) 0:13:37.492 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_etcd_to_config" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" }

TASK [Evaluate oo_first_etcd] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:69
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.060) 0:13:37.553 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => { "add_host": { "groups": [ "oo_first_etcd" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false }

TASK [Evaluate oo_etcd_hosts_to_upgrade] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:81
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.051) 0:13:37.605 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_etcd_hosts_to_upgrade" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" }

TASK [Evaluate oo_etcd_hosts_to_backup] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:88
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.050) 0:13:37.656 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_etcd_hosts_to_backup" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" }

TASK [Evaluate oo_nodes_to_config] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:95
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.054) 0:13:37.710 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-infra01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-infra01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-infra01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-infra01.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-infra02.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-infra02.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-infra02.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-infra02.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node02.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node02.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node02.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node02.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node03.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node03.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node03.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node03.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node04.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node04.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node04.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node04.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node05.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node05.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node05.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node05.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node06.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node06.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node06.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node06.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node07.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node07.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node07.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node07.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node08.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node08.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node08.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node08.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node09.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node09.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node09.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node09.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node10.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node10.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node10.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node10.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node11.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node11.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node11.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node11.os.ad.scanplus.de" }
creating host via 'add_host': hostname=sp-os-node12.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-node12.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_config" ], "host_name": "sp-os-node12.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node12.os.ad.scanplus.de" }

TASK [Evaluate oo_lb_to_config] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:104
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.144) 0:13:37.854 *****

TASK [Evaluate oo_nfs_to_config] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:113
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.027) 0:13:37.881 *****

TASK [Evaluate oo_glusterfs_to_config] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:122
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.030) 0:13:37.912 *****

TASK [Evaluate oo_etcd_to_migrate] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/evaluate_groups.yml:131
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.031) 0:13:37.944 *****
creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de
ok: [localhost] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_etcd_to_migrate" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" }
META: ran handlers
META: ran handlers

PLAY [Ensure that all non-node hosts are accessible] ****************************************
META: ran handlers
META: ran handlers
META: ran handlers

PLAY [Initialize basic host facts] ****************************************
META: ran handlers

TASK [openshift_sanitize_inventory : include_tasks] ****************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:4
Wednesday 09 January 2019 15:53:03 +0100 (0:00:00.151) 0:13:38.095 *****
statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml
included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for sp-os-master01.os.ad.scanplus.de

TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] ****************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:4
Wednesday 09 January 2019 15:53:04 +0100 (0:00:00.333) 0:13:38.429 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }
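The "censored" result above is Ansible's standard placeholder for any task that sets no_log: true; because the deprecation check may touch sensitive inventory values, its output is suppressed. A minimal sketch of the mechanism (the variable name is illustrative, not the role's actual task):

    - name: Check for usage of deprecated variables
      debug:
        msg: "{{ deprecated_vars_in_use | default([]) }}"  # may expose secrets
      no_log: true  # Ansible prints the 'censored' notice instead of the result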
"openshift_logging_elasticsearch_ops_pvc_prefix": "", "openshift_logging_elasticsearch_ops_pvc_size": "", "openshift_logging_elasticsearch_pvc_dynamic": "", "openshift_logging_elasticsearch_pvc_prefix": "", "openshift_logging_elasticsearch_pvc_size": "" }, "changed": false } TASK [openshift_sanitize_inventory : Standardize on latest variable names] ************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:7 Wednesday 09 January 2019 15:53:04 +0100 (0:00:00.151) 0:13:39.135 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "deployment_subtype": "basic", "openshift_deployment_subtype": "basic" }, "changed": false } TASK [openshift_sanitize_inventory : Normalize openshift_release] *********************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:12 Wednesday 09 January 2019 15:53:05 +0100 (0:00:00.150) 0:13:39.286 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:22 Wednesday 09 January 2019 15:53:05 +0100 (0:00:00.103) 0:13:39.389 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:31 Wednesday 09 January 2019 15:53:05 +0100 (0:00:00.114) 0:13:39.504 ***** included: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for sp-os-master01.os.ad.scanplus.de TASK [openshift_sanitize_inventory : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:5 Wednesday 09 January 2019 15:53:05 +0100 (0:00:00.195) 0:13:39.699 ***** TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] 
********************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:12 Wednesday 09 January 2019 15:53:05 +0100 (0:00:00.206) 0:13:39.905 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] **************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:28 Wednesday 09 January 2019 15:53:05 +0100 (0:00:00.136) 0:13:40.042 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] **************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:41 Wednesday 09 January 2019 15:53:05 +0100 (0:00:00.111) 0:13:40.153 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Check for deprecated prometheus/grafana install] *************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:53 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.106) 0:13:40.260 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] ********************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:35 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.115) 0:13:40.376 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] **************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:48 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.106) 0:13:40.482 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional 
result was False" } TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] ************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:57 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.103) 0:13:40.586 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] ********************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:66 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.105) 0:13:40.691 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : ensure provider configuration variables are defined] *********************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:83 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.110) 0:13:40.801 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Ensure removed web console extension variables are not set] **************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:98 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.112) 0:13:40.914 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : Ensure that web console port matches API server port] ********************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:109 Wednesday 09 January 2019 15:53:06 +0100 (0:00:00.103) 0:13:41.018 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_sanitize_inventory : At least one master is schedulable] **************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:119 Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.227) 0:13:41.245 ***** 
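The run of "skipping: ... Conditional result was False" results above is how these inventory sanity checks pass: each is a fail-style task guarded by a when expression that is only true for a misconfigured inventory, so a skip means the check succeeded. The same applies to the schedulable-master check that follows. A minimal sketch of the pattern, with illustrative variable names rather than the role's exact expressions:

    - name: Ensure that web console port matches API server port
      fail:
        msg: openshift_master_console_port must match openshift_master_api_port
      when:
        - openshift_master_console_port is defined
        - openshift_master_console_port | int != openshift_master_api_port | int
      # Condition false -> task skipped -> the inventory passed this check.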
TASK [openshift_sanitize_inventory : At least one master is schedulable] ****************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:119
Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.227) 0:13:41.245 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [Detecting Operating System from ostree_booted] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:19
Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.218) 0:13:41.463 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/run/ostree-booted", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"exists": false}, "changed": false}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/run/ostree-booted" } }, "stat": { "exists": false } }

TASK [set openshift_deployment_type if unset] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:28
Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.262) 0:13:41.725 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [initialize_facts set fact openshift_is_atomic] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:35
Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.106) 0:13:41.832 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_is_atomic": false }, "changed": false }

TASK [Determine Atomic Host Docker Version] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:51
Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.131) 0:13:41.963 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
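The OS detection above hinges on a single stat call: /run/ostree-booted exists only on OSTree-based systems such as Atomic Host, so its absence marks this master as a plain RHEL node and lets the Atomic-only tasks (the Docker version checks) skip. A condensed sketch of that sequence; the register name is illustrative:

    - name: Detecting Operating System from ostree_booted
      stat:
        path: /run/ostree-booted  # present only on Atomic Host / OSTree systems
        get_checksum: false
      register: ostree_booted

    - name: initialize_facts set fact openshift_is_atomic
      set_fact:
        openshift_is_atomic: "{{ ostree_booted.stat.exists }}"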
TASK [assert atomic host docker version is 1.12 or later] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:55
Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.101) 0:13:42.064 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
META: ran handlers
META: ran handlers

PLAY [Retrieve existing master configs and validate] ****************************************
META: ran handlers

TASK [openshift_control_plane : stat] ****************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:3
Wednesday 09 January 2019 15:53:07 +0100 (0:00:00.110) 0:13:42.175 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/etc/origin/master/master-config.yaml", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1547019889.8290536, "block_size": 4096, "inode": 395378, "isgid": false, "size": 6719, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 16, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/origin/master/master-config.yaml", "xusr": false, "atime": 1547019890.519067, "isdir": false, "ctime": 1547019889.8290536, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "rusr": true}, "changed": false}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/origin/master/master-config.yaml" } }, "stat": { "atime": 1547019890.519067, "block_size": 4096, "blocks": 16, "ctime": 1547019889.8290536, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 395378, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1547019889.8290536, "nlink": 1, "path": "/etc/origin/master/master-config.yaml", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6719, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } }

TASK [openshift_control_plane : slurp] ****************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:10
Wednesday 09 January 2019 15:53:08 +0100 (0:00:00.297) 0:13:42.473 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"content": "admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        env: []
        kind: BuildDefaultsConfig
        resources:
          limits: {}
          requests: {}
    BuildOverrides:
      configuration:
        apiVersion: v1
        kind: BuildOverridesConfig
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        executionRules:
        - matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: 'true'
          name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          skipOnResolutionFailure: true
        kind: ImagePolicyConfig
aggregatorConfig:
  proxyClientInfo:
    certFile: aggregator-front-proxy.crt
    keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
authConfig:
  requestHeader:
    clientCA: front-proxy-ca.crt
    clientCommonNames:
    - aggregator-front-proxy
    extraHeaderPrefixes:
    - X-Remote-Extra-
    groupHeaders:
    - X-Remote-Group
    usernameHeaders:
    - X-Remote-User
controllerConfig:
  election:
    lockName: openshift-master-controllers
  serviceServingCert:
    signer:
      certFile: service-signer.crt
      keyFile: service-signer.key
controllers: '*'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//172\.30\.80\.240(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//172\.18\.128\.1(:|\z)
- (?i)//sp\-os\-master01\.os\.ad\.scanplus\.de(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://sp-os-master01.os.ad.scanplus.de:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: registry.redhat.io/openshift3/ose-${component}:${version}
  latest: false
imagePolicyConfig:
  MaxScheduledImageImportsPerMinute: 10
  ScheduledImageImportMinimumIntervalSeconds: 1800
  disableScheduledImport: false
  internalRegistryHostname: docker-registry.default.svc:5000
  maxImagesBulkImportedPerRepository: 3
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiServerArguments:
    runtime-config: []
    storage-backend:
    - etcd3
    storage-media-type:
    - application/vnd.kubernetes.protobuf
  controllerArguments:
    cluster-signing-cert-file:
    - /etc/origin/master/ca.crt
    cluster-signing-key-file:
    - /etc/origin/master/ca.key
    pv-recycler-pod-template-filepath-hostpath:
    - /etc/origin/master/recycler_pod.yaml
    pv-recycler-pod-template-filepath-nfs:
    - /etc/origin/master/recycler_pod.yaml
  masterCount: 1
  masterIP: 172.30.80.240
  podEvictionTimeout: null
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments: null
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ''
  servicesSubnet: 172.18.128.0/17
  staticNodeNames: []
masterClients:
  externalKubernetesClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 400
    contentType: application/vnd.kubernetes.protobuf
    qps: 200
  externalKubernetesKubeConfig: ''
  openshiftLoopbackClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 600
    contentType: application/vnd.kubernetes.protobuf
    qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://os.ad.scanplus.de:8443
networkConfig:
  clusterNetworks:
  - cidr: 172.18.0.0/17
    hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.18.128.0/17
oauthConfig:
  assetPublicURL: https://os.ad.scanplus.de:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    name: RH_IPA_LDAP_Auth
    provider:
      apiVersion: v1
      attributes:
        email:
        - mail
        id:
        - sAMAccountName
        name:
        - cn
        preferredUsername:
        - sAMAccountName
      bindDN: CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de
      bindPassword: 3UAL.dMJI4!b
      insecure: true
      kind: LDAPPasswordIdentityProvider
      url: ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)
  masterCA: ca-bundle.crt
  masterPublicURL: https://os.ad.scanplus.de:8443
  masterURL: https://sp-os-master01.os.ad.scanplus.de:8443
  servingInfo:
    namedCertificates:
    - certFile: /etc/origin/master/named_certificates/cert.crt
      keyFile: /etc/origin/master/named_certificates/cert.key
      names:
      - os.ad.scanplus.de
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: /etc/origin/master/policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: nodeusage=dev
  projectRequestMessage: ''
  projectRequestTemplate: ''
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: apps.os.ad.scanplus.de
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  namedCertificates:
  - certFile: /etc/origin/master/named_certificates/cert.crt
    keyFile: /etc/origin/master/named_certificates/cert.key
    names:
    - os.ad.scanplus.de
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true
", "source": "/etc/origin/master/master-config.yaml", "encoding": "base64", "invocation": {"module_args": {"src": "/etc/origin/master/master-config.yaml"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "content": "admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        env: []
        kind: BuildDefaultsConfig
        resources:
          limits: {}
          requests: {}
    BuildOverrides:
      configuration:
        apiVersion: v1
        kind: BuildOverridesConfig
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        executionRules:
        - matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: 'true'
          name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          skipOnResolutionFailure: true
        kind: ImagePolicyConfig
aggregatorConfig:
  proxyClientInfo:
    certFile: aggregator-front-proxy.crt
    keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
authConfig:
  requestHeader:
    clientCA: front-proxy-ca.crt
    clientCommonNames:
    - aggregator-front-proxy
    extraHeaderPrefixes:
    - X-Remote-Extra-
    groupHeaders:
    - X-Remote-Group
    usernameHeaders:
    - X-Remote-User
controllerConfig:
  election:
    lockName: openshift-master-controllers
  serviceServingCert:
    signer:
      certFile: service-signer.crt
      keyFile: service-signer.key
controllers: '*'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//172\.30\.80\.240(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//172\.18\.128\.1(:|\z)
- (?i)//sp\-os\-master01\.os\.ad\.scanplus\.de(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://sp-os-master01.os.ad.scanplus.de:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: registry.redhat.io/openshift3/ose-${component}:${version}
  latest: false
imagePolicyConfig:
  MaxScheduledImageImportsPerMinute: 10
  ScheduledImageImportMinimumIntervalSeconds: 1800
  disableScheduledImport: false
  internalRegistryHostname: docker-registry.default.svc:5000
  maxImagesBulkImportedPerRepository: 3
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiServerArguments:
    runtime-config: []
    storage-backend:
    - etcd3
    storage-media-type:
    - application/vnd.kubernetes.protobuf
  controllerArguments:
    cluster-signing-cert-file:
    - /etc/origin/master/ca.crt
    cluster-signing-key-file:
    - /etc/origin/master/ca.key
    pv-recycler-pod-template-filepath-hostpath:
    - /etc/origin/master/recycler_pod.yaml
    pv-recycler-pod-template-filepath-nfs:
    - /etc/origin/master/recycler_pod.yaml
  masterCount: 1
  masterIP: 172.30.80.240
  podEvictionTimeout: null
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments: null
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ''
  servicesSubnet: 172.18.128.0/17
  staticNodeNames: []
masterClients:
  externalKubernetesClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 400
    contentType: application/vnd.kubernetes.protobuf
    qps: 200
  externalKubernetesKubeConfig: ''
  openshiftLoopbackClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 600
    contentType: application/vnd.kubernetes.protobuf
    qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://os.ad.scanplus.de:8443
networkConfig:
  clusterNetworks:
  - cidr: 172.18.0.0/17
    hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.18.128.0/17
oauthConfig:
  assetPublicURL: https://os.ad.scanplus.de:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    name: RH_IPA_LDAP_Auth
    provider:
      apiVersion: v1
      attributes:
        email:
        - mail
        id:
        - sAMAccountName
        name:
        - cn
        preferredUsername:
        - sAMAccountName
      bindDN: CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de
      bindPassword: 3UAL.dMJI4!b
      insecure: true
      kind: LDAPPasswordIdentityProvider
      url: ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)
  masterCA: ca-bundle.crt
  masterPublicURL: https://os.ad.scanplus.de:8443
  masterURL: https://sp-os-master01.os.ad.scanplus.de:8443
  servingInfo:
    namedCertificates:
    - certFile: /etc/origin/master/named_certificates/cert.crt
      keyFile: /etc/origin/master/named_certificates/cert.key
      names:
      - os.ad.scanplus.de
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: /etc/origin/master/policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: nodeusage=dev
  projectRequestMessage: ''
  projectRequestTemplate: ''
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: apps.os.ad.scanplus.de
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  namedCertificates:
  - certFile: /etc/origin/master/named_certificates/cert.crt
    keyFile: /etc/origin/master/named_certificates/cert.key
    names:
    - os.ad.scanplus.de
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true
", "encoding": "base64", "invocation": { "module_args": { "src": "/etc/origin/master/master-config.yaml" } }, "source": "/etc/origin/master/master-config.yaml" } TASK [openshift_control_plane : set_fact] *********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:17 Wednesday 09 January 2019 15:53:08 +0100 (0:00:00.271) 0:13:42.744 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_existing_config_master_config": { "admissionConfig": { "pluginConfig": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } } }, "aggregatorConfig": { "proxyClientInfo": { "certFile": "aggregator-front-proxy.crt", "keyFile": "aggregator-front-proxy.key" } }, "apiLevels": [ "v1" ], "apiVersion": "v1", "authConfig": { "requestHeader": { "clientCA": "front-proxy-ca.crt", "clientCommonNames": [ "aggregator-front-proxy" ], "extraHeaderPrefixes": [ "X-Remote-Extra-" ], "groupHeaders": [ "X-Remote-Group" ], "usernameHeaders": [ "X-Remote-User" ] } }, "controllerConfig": { "election": { "lockName": "openshift-master-controllers" }, "serviceServingCert": { "signer": { "certFile": "service-signer.crt", "keyFile": "service-signer.key" } } }, "controllers": "*", "corsAllowedOrigins": [ "(?i)//127\\.0\\.0\\.1(:|\\z)", "(?i)//localhost(:|\\z)", "(?i)//172\\.30\\.80\\.240(:|\\z)", "(?i)//kubernetes\\.default(:|\\z)", "(?i)//kubernetes\\.default\\.svc\\.cluster\\.local(:|\\z)", "(?i)//kubernetes(:|\\z)", "(?i)//openshift\\.default(:|\\z)", "(?i)//172\\.18\\.128\\.1(:|\\z)", "(?i)//sp\\-os\\-master01\\.os\\.ad\\.scanplus\\.de(:|\\z)", "(?i)//openshift\\.default\\.svc(:|\\z)", "(?i)//openshift\\.default\\.svc\\.cluster\\.local(:|\\z)", "(?i)//kubernetes\\.default\\.svc(:|\\z)", "(?i)//openshift(:|\\z)" ], "dnsConfig": { "bindAddress": "0.0.0.0:8053", "bindNetwork": "tcp4" }, "etcdClientInfo": { "ca": "master.etcd-ca.crt", "certFile": "master.etcd-client.crt", "keyFile": "master.etcd-client.key", "urls": [ "https://sp-os-master01.os.ad.scanplus.de:2379" ] }, "etcdStorageConfig": { "kubernetesStoragePrefix": "kubernetes.io", "kubernetesStorageVersion": "v1", "openShiftStoragePrefix": "openshift.io", "openShiftStorageVersion": "v1" }, "imageConfig": { "format": "registry.redhat.io/openshift3/ose-${component}:${version}", "latest": false }, "imagePolicyConfig": { "MaxScheduledImageImportsPerMinute": 10, "ScheduledImageImportMinimumIntervalSeconds": 1800, "disableScheduledImport": false, "internalRegistryHostname": "docker-registry.default.svc:5000", "maxImagesBulkImportedPerRepository": 3 }, "kind": "MasterConfig", "kubeletClientInfo": { "ca": "ca-bundle.crt", "certFile": "master.kubelet-client.crt", "keyFile": "master.kubelet-client.key", 
"port": 10250 }, "kubernetesMasterConfig": { "apiServerArguments": { "runtime-config": [], "storage-backend": [ "etcd3" ], "storage-media-type": [ "application/vnd.kubernetes.protobuf" ] }, "controllerArguments": { "cluster-signing-cert-file": [ "/etc/origin/master/ca.crt" ], "cluster-signing-key-file": [ "/etc/origin/master/ca.key" ], "pv-recycler-pod-template-filepath-hostpath": [ "/etc/origin/master/recycler_pod.yaml" ], "pv-recycler-pod-template-filepath-nfs": [ "/etc/origin/master/recycler_pod.yaml" ] }, "masterCount": 1, "masterIP": "172.30.80.240", "podEvictionTimeout": null, "proxyClientInfo": { "certFile": "master.proxy-client.crt", "keyFile": "master.proxy-client.key" }, "schedulerArguments": null, "schedulerConfigFile": "/etc/origin/master/scheduler.json", "servicesNodePortRange": "", "servicesSubnet": "172.18.128.0/17", "staticNodeNames": [] }, "masterClients": { "externalKubernetesClientConnectionOverrides": { "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 400, "contentType": "application/vnd.kubernetes.protobuf", "qps": 200 }, "externalKubernetesKubeConfig": "", "openshiftLoopbackClientConnectionOverrides": { "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 600, "contentType": "application/vnd.kubernetes.protobuf", "qps": 300 }, "openshiftLoopbackKubeConfig": "openshift-master.kubeconfig" }, "masterPublicURL": "https://os.ad.scanplus.de:8443", "networkConfig": { "clusterNetworks": [ { "cidr": "172.18.0.0/17", "hostSubnetLength": 9 } ], "externalIPNetworkCIDRs": [ "0.0.0.0/0" ], "networkPluginName": "redhat/openshift-ovs-multitenant", "serviceNetworkCIDR": "172.18.128.0/17" }, "oauthConfig": { "assetPublicURL": "https://os.ad.scanplus.de:8443/console/", "grantConfig": { "method": "auto" }, "identityProviders": [ { "challenge": true, "login": true, "name": "RH_IPA_LDAP_Auth", "provider": { "apiVersion": "v1", "attributes": { "email": [ "mail" ], "id": [ "sAMAccountName" ], "name": [ "cn" ], "preferredUsername": [ "sAMAccountName" ] }, "bindDN": "CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de", "bindPassword": "3UAL.dMJI4!b", "insecure": true, "kind": "LDAPPasswordIdentityProvider", "url": "ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)" } } ], "masterCA": "ca-bundle.crt", "masterPublicURL": "https://os.ad.scanplus.de:8443", "masterURL": "https://sp-os-master01.os.ad.scanplus.de:8443", "servingInfo": { "namedCertificates": [ { "certFile": "/etc/origin/master/named_certificates/cert.crt", "keyFile": "/etc/origin/master/named_certificates/cert.key", "names": [ "os.ad.scanplus.de" ] } ] }, "sessionConfig": { "sessionMaxAgeSeconds": 3600, "sessionName": "ssn", "sessionSecretsFile": "/etc/origin/master/session-secrets.yaml" }, "tokenConfig": { "accessTokenMaxAgeSeconds": 86400, "authorizeTokenMaxAgeSeconds": 500 } }, "pauseControllers": false, "policyConfig": { "bootstrapPolicyFile": "/etc/origin/master/policy.json", "openshiftInfrastructureNamespace": "openshift-infra", "openshiftSharedResourcesNamespace": "openshift" }, "projectConfig": { "defaultNodeSelector": "nodeusage=dev", "projectRequestMessage": "", "projectRequestTemplate": "", "securityAllocator": { "mcsAllocatorRange": "s0:/2", "mcsLabelsPerProject": 5, "uidAllocatorRange": "1000000000-1999999999/10000" } }, "routingConfig": { "subdomain": "apps.os.ad.scanplus.de" }, 
"serviceAccountConfig": { "limitSecretReferences": false, "managedNames": [ "default", "builder", "deployer" ], "masterCA": "ca-bundle.crt", "privateKeyFile": "serviceaccounts.private.key", "publicKeyFiles": [ "serviceaccounts.public.key" ] }, "servingInfo": { "bindAddress": "0.0.0.0:8443", "bindNetwork": "tcp4", "certFile": "master.server.crt", "clientCA": "ca.crt", "keyFile": "master.server.key", "maxRequestsInFlight": 500, "namedCertificates": [ { "certFile": "/etc/origin/master/named_certificates/cert.crt", "keyFile": "/etc/origin/master/named_certificates/cert.key", "names": [ "os.ad.scanplus.de" ] } ], "requestTimeoutSeconds": 3600 }, "volumeConfig": { "dynamicProvisioningEnabled": true } } }, "changed": false } TASK [openshift_control_plane : Check for file paths outside of /etc/origin/master in master's config] ********************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:23 Wednesday 09 January 2019 15:53:08 +0100 (0:00:00.206) 0:13:42.951 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "Aight, configs looking good" } TASK [openshift_control_plane : set_fact] *********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:28 Wednesday 09 January 2019 15:53:08 +0100 (0:00:00.166) 0:13:43.117 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_existing_idproviders": [ { "challenge": true, "login": true, "name": "RH_IPA_LDAP_Auth", "provider": { "apiVersion": "v1", "attributes": { "email": [ "mail" ], "id": [ "sAMAccountName" ], "name": [ "cn" ], "preferredUsername": [ "sAMAccountName" ] }, "bindDN": "CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de", "bindPassword": "3UAL.dMJI4!b", "insecure": true, "kind": "LDAPPasswordIdentityProvider", "url": "ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)" } } ] }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:76 Wednesday 09 January 2019 15:53:09 +0100 (0:00:00.163) 0:13:43.281 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: 
TASK [set_fact] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:76
Wednesday 09 January 2019 15:53:09 +0100 (0:00:00.163) 0:13:43.281 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false }

TASK [set_fact] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:79
Wednesday 09 January 2019 15:53:09 +0100 (0:00:00.141) 0:13:43.422 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "osm_cluster_network_cidr": "172.18.0.0/17", "osm_host_subnet_length": "9" }, "changed": false }
META: ran handlers
META: ran handlers

PLAY [Initialize special first-master variables] ****************************************
META: ran handlers

TASK [set_fact] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:93
Wednesday 09 January 2019 15:53:09 +0100 (0:00:00.164) 0:13:43.586 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_config_node_selector": "nodeusage=dev" }, "changed": false }

TASK [set_fact] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:102
Wednesday 09 January 2019 15:53:09 +0100 (0:00:00.143) 0:13:43.729 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "first_master_client_binary": "oc", "l_osm_default_node_selector": "nodeusage=dev", "openshift_client_binary": "oc" }, "changed": false }
META: ran handlers
META: ran handlers

PLAY [Disable web console if required] ****************************************
META: ran handlers

TASK [set_fact] ****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:115
Wednesday 09 January 2019 15:53:09 +0100 (0:00:00.165) 0:13:43.895 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
META: ran handlers
META: ran handlers

PLAY [Install packages necessary for installer] ****************************************
META: ran handlers
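The final tasks in this excerpt, below, follow a common detect-then-install pattern: probe for chrony with rpm -q, treating a non-zero return code as data rather than failure, and install ntp only when no time-sync package is present. A minimal sketch of that pattern; the register name and the guard condition are illustrative (the role's actual guard also honors the inventory's clock settings):

    - name: Determine if chrony is installed
      command: rpm -q chrony
      register: chrony_installed
      failed_when: false  # rc 1 just means "not installed", as in the log below

    - name: Install ntp package
      yum:
        name: ntp
        state: present
      when: chrony_installed.rc != 0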
************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:9 Wednesday 09 January 2019 15:53:09 +0100 (0:00:00.129) 0:13:44.024 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"changed": true, "end": "2019-01-09 15:53:10.252411", "stdout": "package chrony is not installed", "cmd": ["rpm", "-q", "chrony"], "failed": true, "delta": "0:00:00.029738", "stderr": "", "rc": 1, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "rpm -q chrony", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2019-01-09 15:53:10.222673", "warnings": ["Consider using the yum, dnf or zypper module rather than running rpm. If you need to use command because yum, dnf or zypper is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid of this message."], "msg": "non-zero return code"}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "rpm", "-q", "chrony" ], "delta": "0:00:00.029738", "end": "2019-01-09 15:53:10.252411", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "rpm -q chrony", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "msg": "non-zero return code", "rc": 1, "start": "2019-01-09 15:53:10.222673", "stderr": "", "stderr_lines": [], "stdout": "package chrony is not installed", "stdout_lines": [ "package chrony is not installed" ] } TASK [Install ntp package] ************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:16 Wednesday 09 January 2019 15:53:10 +0100 (0:00:00.572) 0:13:44.597 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["ntp"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": 
null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["ntp-4.2.6p5-28.el7.x86_64 providing ntp is already installed"], "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "ntp" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "ntp-4.2.6p5-28.el7.x86_64 providing ntp is already installed" ] } TASK [Start and enable ntpd/chronyd] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:26 Wednesday 09 January 2019 15:53:24 +0100 (0:00:13.652) 0:13:58.249 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:53:24.250590", "stdout": "", "cmd": ["timedatectl", "set-ntp", "true"], "rc": 0, "start": "2019-01-09 15:53:24.205668", "stderr": "", "delta": "0:00:00.044922", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "timedatectl set-ntp true", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "timedatectl", "set-ntp", "true" ], "delta": "0:00:00.044922", "end": "2019-01-09 15:53:24.250590", "invocation": { "module_args": { "_raw_params": "timedatectl set-ntp true", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:53:24.205668", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [Ensure openshift-ansible installer package deps are installed] ******************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:33 Wednesday 09 January 2019 15:53:24 +0100 (0:00:00.404) 0:13:58.654 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["python-ipaddress", "iproute", "dbus-python", "PyYAML", "libsemanage-python", "yum-utils", "python-docker-py"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["python-ipaddress-1.0.16-2.el7.noarch providing python-ipaddress is already installed", "iproute-4.11.0-14.el7.x86_64 providing iproute is already installed", "dbus-python-1.1.1-9.el7.x86_64 providing dbus-python is already installed", "PyYAML-3.10-11.el7.x86_64 providing PyYAML is already installed", "libsemanage-python-2.5-11.el7.x86_64 providing libsemanage-python is already installed", "yum-utils-1.1.31-46.el7_5.noarch providing yum-utils is already installed", "python-docker-2.4.2-1.3.el7.noarch providing python-docker-py is already installed"], "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "python-ipaddress", "iproute", "dbus-python", "PyYAML", "libsemanage-python", "yum-utils", "python-docker-py" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "python-ipaddress-1.0.16-2.el7.noarch providing python-ipaddress is already installed", "iproute-4.11.0-14.el7.x86_64 providing iproute is already installed", "dbus-python-1.1.1-9.el7.x86_64 providing dbus-python is already installed", "PyYAML-3.10-11.el7.x86_64 providing PyYAML is already installed", "libsemanage-python-2.5-11.el7.x86_64 providing libsemanage-python is already installed", "yum-utils-1.1.31-46.el7_5.noarch providing yum-utils is already installed", "python-docker-2.4.2-1.3.el7.noarch providing python-docker-py is already installed" ] } META: ran handlers META: ran handlers PLAY [Initialize cluster facts] ********************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [get openshift_current_version] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:10 Wednesday 09 January 2019 15:54:29 +0100 (0:01:05.066) 0:15:03.720 ***** Using module file 
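
The "Initialize cluster facts" play that begins here resolves the running cluster version through a vendored fact module rather than rpm queries. A hedged sketch of the task in playbooks/init/cluster_facts.yml — the module name and its single argument are copied from the logged invocation, the task wrapper is an assumption:

- name: get openshift_current_version
  get_current_openshift_version:
    deployment_type: openshift-enterprise   # logged module_args
  # sets ansible_facts.openshift_current_version; "3.11.51" on this master
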
/usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"deployment_type": "openshift-enterprise"}}, "changed": false, "ansible_facts": {"openshift_current_version": "3.11.51"}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_current_version": "3.11.51" }, "changed": false, "invocation": { "module_args": { "deployment_type": "openshift-enterprise" } } } TASK [set_fact openshift_portal_net if present on masters] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:19 Wednesday 09 January 2019 15:54:29 +0100 (0:00:00.466) 0:15:04.187 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_portal_net": "172.18.128.0/17" }, "changed": false } TASK [Gather Cluster facts] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:27 Wednesday 09 January 2019 15:54:30 +0100 (0:00:00.288) 0:15:04.476 ***** Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "common", "selevel": null, "regexp": null, "src": null, "local_facts": {"public_ip": "", "hostname": "", "cloudprovider": "", "no_proxy": "", "ip": "", "http_proxy": "", "portal_net": "172.18.128.0/17", "https_proxy": "", "generate_no_proxy_hosts": true, "public_hostname": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"dns_ip": "172.30.80.240", "proxy_mode": "iptables", "nodename": "sp-os-master01.os.ad.scanplus.de", "bootstrapped": true, "sdn_mtu": "1450"}, "builddefaults": {"config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, 
"limits": {}}, "env": [], "apiVersion": "v1"}}}}, "logging": {"elasticsearch": {"pvc": {}, "ops": {"pvc": {}}}}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides"]}, "master": {"public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "api_port": "8443", "console_port": "8443", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "console_path": "/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "console_use_ssl": true, "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "cluster_method": "native", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "admission_plugin_config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}, "BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}, "openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"key": "images.openshift.io/deny-execution", "value": "true"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}, "named_certificates": [{"certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": ["sp-os-master01.os.ad.scanplus.de"], "cafile": "/etc/origin/master/named_certificates/ca.crt"}], "manage_htpasswd": true, "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "controllers_port": "8444", "session_name": "ssn"}, "common": {"is_etcd_system_container": false, "ip": "172.30.80.240", "dns_domain": "cluster.local", "is_master_system_container": false, "public_ip": "172.30.80.240", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "etcd_runtime": "host", "rolling_restart_mode": "services", "hostname": "sp-os-master01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "is_openvswitch_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "kube_svc_ip": "172.18.128.1", "config_base": "/etc/origin", "all_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "is_containerized": false, "no_proxy_etcd_host_ips": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "portal_net": "172.18.128.0/17", "deployment_type": "openshift-enterprise"}, "hosted": {"templates": {"kubeconfig": 
"/tmp/openshift-ansible-DNTbe3/admin.kubeconfig"}, "routers": [{"name": "router", "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "replicas": "{{ replicas | default(1) }}", "serviceaccount": "router", "namespace": "default", "stats_port": 1936, "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "ports": ["80:80", "443:443"]}], "infra": {"selector": "region=infra"}, "registry": {"force": [false], "name": "docker-registry", "serviceaccount": "registry", "edits": [{"action": "put", "value": {"updatePeriodSeconds": 1, "timeoutSeconds": 600, "maxSurge": "25%", "intervalSeconds": 1, "maxUnavailable": "25%"}, "key": "spec.strategy.rollingParams"}], "selector": "region=infra", "cert": {"expire": {"days": 730}}, "env": {"vars": {}}, "volumes": [], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "router": {"certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "create_certificate": true, "image": "openshift3/ose-${component}:${version}", "selector": "region=infra", "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "docker": {"registry": {"insecure": {"default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}"}}}, "wfp": {"rc": {"phase": {"msg": "All items completed", "changed": true, "results": [{"_ansible_parsed": true, "stderr_lines": [], "rc": 0, "_ansible_item_result": true, "end": "2018-01-31 14:15:11.698797", "_ansible_no_log": false, "stdout": "Complete", "cmd": ["oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }"], "attempts": 1, "item": [{"name": "router", "certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "stats_port": 1936, "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "images": "openshift3/ose-${component}:${version}", "selector": "region=infra", "ports": ["80:80", "443:443"]}, {"_ansible_parsed": true, "stderr_lines": [], "_ansible_item_result": true, "end": "2018-01-31 14:15:11.096068", "_ansible_no_log": false, "stdout": "1", "cmd": ["oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "rc": 0, "item": {"name": "router", "certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": 
"default", "serviceaccount": "router", "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "images": "openshift3/ose-${component}:${version}", "stats_port": 1936, "ports": ["80:80", "443:443"]}, "delta": "0:00:00.196315", "stderr": "", "changed": true, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .status.latestVersion }\'", "removes": null, "creates": null, "chdir": null, "stdin": null}}, "stdout_lines": ["1"], "start": "2018-01-31 14:15:10.899753", "_ansible_ignore_errors": null, "failed": false}], "delta": "0:00:00.199963", "stderr": "", "changed": true, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }\'", "removes": null, "warn": true, "chdir": null, "stdin": null}}, "stdout_lines": ["Complete"], "failed_when_result": false, "start": "2018-01-31 14:15:11.498834", "_ansible_ignore_errors": null, "failed": false}]}}}}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "buildoverrides": {"config": {"BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}}}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "builddefaults": { "config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } } } }, "buildoverrides": { "config": { "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } } } }, "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.240", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "no_proxy_etcd_host_ips": "172.30.80.240", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "public_ip": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "rolling_restart_mode": "services", 
"system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "hosted": { "docker": { "registry": { "insecure": { "default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}" } } }, "infra": { "selector": "region=infra" }, "registry": { "cert": { "expire": { "days": 730 } }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams", "value": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 } } ], "env": { "vars": {} }, "force": [ false ], "name": "docker-registry", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "serviceaccount": "registry", "volumes": [], "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "router": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "create_certificate": true, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "image": "openshift3/ose-${component}:${version}", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "routers": [ { "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "{{ replicas | default(1) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "serviceaccount": "router", "stats_port": 1936 } ], "templates": { "kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig" }, "wfp": { "rc": { "phase": { "changed": true, "msg": "All items completed", "results": [ { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "attempts": 1, "changed": true, "cmd": [ "oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\.io/deployment\\.phase }" ], "delta": "0:00:00.199963", "end": "2018-01-31 14:15:11.698797", "failed": false, "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .metadata.annotations.openshift\\.io/deployment\\.phase }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": [ { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { 
"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": [ "oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }" ], "delta": "0:00:00.196315", "end": "2018-01-31 14:15:11.096068", "failed": false, "invocation": { "module_args": { "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .status.latestVersion }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, "rc": 0, "start": "2018-01-31 14:15:10.899753", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": [ "1" ] } ], "rc": 0, "start": "2018-01-31 14:15:11.498834", "stderr": "", "stderr_lines": [], "stdout": "Complete", "stdout_lines": [ "Complete" ] } ] } } } }, "logging": { "elasticsearch": { "ops": { "pvc": {} }, "pvc": {} } }, "master": { "admission_plugin_config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } }, "api_port": "8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "cluster_method": "native", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "manage_htpasswd": true, "named_certificates": [ { "cafile": "/etc/origin/master/named_certificates/ca.crt", "certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": 
"/etc/origin/master/named_certificates/cert.key", "names": [ "sp-os-master01.os.ad.scanplus.de" ] } ], "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.240", "nodename": "sp-os-master01.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "cloudprovider": "", "generate_no_proxy_hosts": true, "hostname": "", "http_proxy": "", "https_proxy": "", "ip": "", "no_proxy": "", "portal_net": "172.18.128.0/17", "public_hostname": "", "public_ip": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "common", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } TASK [Set fact of no_proxy_internal_hostnames] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:42 Wednesday 09 January 2019 15:54:31 +0100 (0:00:01.031) 0:15:05.507 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Initialize openshift.node.sdn_mtu] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:60 Wednesday 09 January 2019 15:54:31 +0100 (0:00:00.116) 0:15:05.623 ***** Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "node", "selevel": null, "regexp": null, "src": null, "local_facts": {"sdn_mtu": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"dns_ip": "172.30.80.240", "proxy_mode": "iptables", "nodename": 
"sp-os-master01.os.ad.scanplus.de", "bootstrapped": true, "sdn_mtu": "1450"}, "builddefaults": {"config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}}}, "logging": {"elasticsearch": {"pvc": {}, "ops": {"pvc": {}}}}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides"]}, "master": {"public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "api_port": "8443", "console_port": "8443", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "console_path": "/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "console_use_ssl": true, "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "cluster_method": "native", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "admission_plugin_config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}, "BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}, "openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"key": "images.openshift.io/deny-execution", "value": "true"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}, "named_certificates": [{"certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": ["sp-os-master01.os.ad.scanplus.de"], "cafile": "/etc/origin/master/named_certificates/ca.crt"}], "manage_htpasswd": true, "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "controllers_port": "8444", "session_name": "ssn"}, "common": {"is_etcd_system_container": false, "ip": "172.30.80.240", "dns_domain": "cluster.local", "is_master_system_container": false, "public_ip": "172.30.80.240", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "etcd_runtime": "host", "rolling_restart_mode": "services", "hostname": "sp-os-master01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "is_openvswitch_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "kube_svc_ip": "172.18.128.1", "config_base": "/etc/origin", "all_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "is_containerized": false, "no_proxy_etcd_host_ips": 
"172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "portal_net": "172.18.128.0/17", "deployment_type": "openshift-enterprise"}, "hosted": {"templates": {"kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig"}, "routers": [{"name": "router", "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "replicas": "{{ replicas | default(1) }}", "serviceaccount": "router", "namespace": "default", "stats_port": 1936, "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "ports": ["80:80", "443:443"]}], "infra": {"selector": "region=infra"}, "registry": {"force": [false], "name": "docker-registry", "serviceaccount": "registry", "edits": [{"action": "put", "value": {"updatePeriodSeconds": 1, "timeoutSeconds": 600, "maxSurge": "25%", "intervalSeconds": 1, "maxUnavailable": "25%"}, "key": "spec.strategy.rollingParams"}], "selector": "region=infra", "cert": {"expire": {"days": 730}}, "env": {"vars": {}}, "volumes": [], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "router": {"certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "create_certificate": true, "image": "openshift3/ose-${component}:${version}", "selector": "region=infra", "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "docker": {"registry": {"insecure": {"default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}"}}}, "wfp": {"rc": {"phase": {"msg": "All items completed", "changed": true, "results": [{"_ansible_parsed": true, "stderr_lines": [], "rc": 0, "_ansible_item_result": true, "end": "2018-01-31 14:15:11.698797", "_ansible_no_log": false, "stdout": "Complete", "cmd": ["oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }"], "attempts": 1, "item": [{"name": "router", "certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "stats_port": 1936, "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "images": "openshift3/ose-${component}:${version}", "selector": "region=infra", "ports": ["80:80", "443:443"]}, {"_ansible_parsed": true, "stderr_lines": [], "_ansible_item_result": true, "end": "2018-01-31 14:15:11.096068", "_ansible_no_log": false, "stdout": "1", "cmd": ["oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "rc": 0, "item": {"name": "router", "certificate": 
{"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "images": "openshift3/ose-${component}:${version}", "stats_port": 1936, "ports": ["80:80", "443:443"]}, "delta": "0:00:00.196315", "stderr": "", "changed": true, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .status.latestVersion }\'", "removes": null, "creates": null, "chdir": null, "stdin": null}}, "stdout_lines": ["1"], "start": "2018-01-31 14:15:10.899753", "_ansible_ignore_errors": null, "failed": false}], "delta": "0:00:00.199963", "stderr": "", "changed": true, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }\'", "removes": null, "warn": true, "chdir": null, "stdin": null}}, "stdout_lines": ["Complete"], "failed_when_result": false, "start": "2018-01-31 14:15:11.498834", "_ansible_ignore_errors": null, "failed": false}]}}}}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "buildoverrides": {"config": {"BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}}}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "builddefaults": { "config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } } } }, "buildoverrides": { "config": { "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } } } }, "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.240", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "no_proxy_etcd_host_ips": "172.30.80.240", "portal_net": "172.18.128.0/17", 
"public_hostname": "sp-os-master01.os.ad.scanplus.de", "public_ip": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "rolling_restart_mode": "services", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "hosted": { "docker": { "registry": { "insecure": { "default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}" } } }, "infra": { "selector": "region=infra" }, "registry": { "cert": { "expire": { "days": 730 } }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams", "value": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 } } ], "env": { "vars": {} }, "force": [ false ], "name": "docker-registry", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "serviceaccount": "registry", "volumes": [], "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "router": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "create_certificate": true, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "image": "openshift3/ose-${component}:${version}", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "routers": [ { "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "{{ replicas | default(1) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "serviceaccount": "router", "stats_port": 1936 } ], "templates": { "kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig" }, "wfp": { "rc": { "phase": { "changed": true, "msg": "All items completed", "results": [ { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "attempts": 1, "changed": true, "cmd": [ "oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\.io/deployment\\.phase }" ], "delta": "0:00:00.199963", "end": "2018-01-31 14:15:11.698797", "failed": false, "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .metadata.annotations.openshift\\.io/deployment\\.phase }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": [ { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": 
"put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": [ "oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }" ], "delta": "0:00:00.196315", "end": "2018-01-31 14:15:11.096068", "failed": false, "invocation": { "module_args": { "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .status.latestVersion }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, "rc": 0, "start": "2018-01-31 14:15:10.899753", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": [ "1" ] } ], "rc": 0, "start": "2018-01-31 14:15:11.498834", "stderr": "", "stderr_lines": [], "stdout": "Complete", "stdout_lines": [ "Complete" ] } ] } } } }, "logging": { "elasticsearch": { "ops": { "pvc": {} }, "pvc": {} } }, "master": { "admission_plugin_config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } }, "api_port": "8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "cluster_method": "native", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "manage_htpasswd": true, "named_certificates": [ { 
"cafile": "/etc/origin/master/named_certificates/ca.crt", "certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": [ "sp-os-master01.os.ad.scanplus.de" ] } ], "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.240", "nodename": "sp-os-master01.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "sdn_mtu": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "node", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } TASK [set_fact l_kubelet_node_name] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:65 Wednesday 09 January 2019 15:54:32 +0100 (0:00:00.972) 0:15:06.595 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_kubelet_node_name": "sp-os-master01.os.ad.scanplus.de" }, "changed": false } META: ran handlers META: ran handlers PLAY [Initialize etcd host variables] *************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:78 Wednesday 09 January 2019 15:54:32 +0100 (0:00:00.140) 0:15:06.735 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_etcd_hosts": [ "sp-os-master01.os.ad.scanplus.de" ], "openshift_master_etcd_port": "2379", "openshift_no_proxy_etcd_host_ips": "172.30.80.240" }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:89 Wednesday 09 January 2019 15:54:32 +0100 (0:00:00.463) 0:15:07.199 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_etcd_urls": [ "https://sp-os-master01.os.ad.scanplus.de:2379" ] 
}, "changed": false } META: ran handlers META: ran handlers PLAY [Inspect cluster certificates] ***************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [openshift_certificate_expiry : Ensure python dateutil library is present] ********************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:2 Wednesday 09 January 2019 15:54:33 +0100 (0:00:00.420) 0:15:07.620 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["python-dateutil"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["python-dateutil-1.5-7.el7.noarch providing python-dateutil is already installed"], "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "python-dateutil" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "python-dateutil-1.5-7.el7.noarch providing python-dateutil is already installed" ] } TASK [openshift_certificate_expiry : Check cert expirys on host] ************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:8 Wednesday 09 January 2019 15:54:46 +0100 (0:00:13.310) 0:15:20.930 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/openshift_cert_expiry.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o 
ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": false, "warn_certs": false, "summary": {"ok": 16, "system_certificates": 9, "registry_certs": 1, "etcd_certificates": 3, "warning": 0, "router_certs": 1, "kubeconfig_certificates": 2, "total": 16, "expired": 0}, "msg": "Checked 16 total certificates. Expired/Warning/OK: 0/0/16. Warning window: 365 days", "rc": 0, "invocation": {"module_args": {"config_base": "/etc/origin", "show_all": false, "warning_days": 365}}, "check_results": {"ocp_certs": [], "meta": {"warn_before_date": "2020-01-09 15:54:47.018982", "show_all": "False", "checked_at_time": "2019-01-09 15:54:47.018982", "warning_days": 365}, "registry": [], "etcd": [], "kubeconfigs": [], "router": []}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "check_results": { "etcd": [], "kubeconfigs": [], "meta": { "checked_at_time": "2019-01-09 15:54:47.018982", "show_all": "False", "warn_before_date": "2020-01-09 15:54:47.018982", "warning_days": 365 }, "ocp_certs": [], "registry": [], "router": [] }, "invocation": { "module_args": { "config_base": "/etc/origin", "show_all": false, "warning_days": 365 } }, "msg": "Checked 16 total certificates. Expired/Warning/OK: 0/0/16. Warning window: 365 days", "rc": 0, "summary": { "etcd_certificates": 3, "expired": 0, "kubeconfig_certificates": 2, "ok": 16, "registry_certs": 1, "router_certs": 1, "system_certificates": 9, "total": 16, "warning": 0 }, "warn_certs": false } TASK [openshift_certificate_expiry : Generate expiration report HTML] ******************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:15 Wednesday 09 January 2019 15:54:47 +0100 (0:00:00.969) 0:15:21.900 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_certificate_expiry : Generate results JSON file] ************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:26 Wednesday 09 January 2019 15:54:47 +0100 (0:00:00.116) 0:15:22.016 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:39 Wednesday 09 January 2019 15:54:47 +0100 (0:00:00.121) 0:15:22.137 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Ensure firewall is not switched during upgrade] 
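The play that follows pins the firewall implementation before anything is upgraded: it records the currently installed OpenShift version (3.11.51 here), inspects the iptables systemd unit, and then forces os_firewall_use_firewalld to false so the upgrade cannot flip the host from iptables to firewalld midway. A minimal sketch of that pattern, with the register name and the exact condition assumed rather than copied from the role:

  - name: Get iptable service details
    systemd:
      name: iptables
    register: l_iptables_service        # illustrative register name

  - name: Set fact os_firewall_use_firewalld FALSE for iptables
    set_fact:
      os_firewall_use_firewalld: false
    when: l_iptables_service.status.ActiveState == 'active'   # assumed condition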
*********************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set currently installed version] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/init.yml:24 Wednesday 09 January 2019 15:54:48 +0100 (0:00:00.120) 0:15:22.258 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_currently_installed_version": "3.11.51" }, "changed": false } TASK [Get iptable service details] ****************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/init.yml:28 Wednesday 09 January 2019 15:54:48 +0100 (0:00:00.228) 0:15:22.487 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"ExecStart": "{ path=/usr/libexec/iptables/iptables.init ; argv[]=/usr/libexec/iptables/iptables.init start ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TimeoutStopUSec": "1min 30s", "ControlGroup": "/system.slice/iptables.service", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ExecMainCode": "1", "UnitFileState": "enabled", "ExecMainPID": "996", "LimitSIGPENDING": "63379", "FileDescriptorStoreMax": "0", "LoadState": "loaded", "ProtectHome": "no", "TTYVTDisallocate": "no", "TTYVHangup": "no", "WatchdogTimestampMonotonic": "0", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "16320198", "AllowIsolate": "no", "AssertTimestamp": "Wed 2019-01-09 09:37:14 CET", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "Environment": "BOOTUP=serial CONSOLETYPE=serial", "ActiveEnterTimestamp": "Wed 2019-01-09 09:37:15 CET", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "NoNewPrivileges": "no", "Before": "ip6tables.service shutdown.target docker.service network.service", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "16102566", "SendSIGHUP": "no", "TimeoutStartUSec": "0", "Type": "oneshot", "SyslogPriority": "30", "SameProcessGroup": "no", "LimitNPROC": "63379", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "ExecMainStartTimestamp": "Wed 2019-01-09 09:37:14 CET", 
"CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "0", "StartLimitBurst": "5", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "ExecReload": "{ path=/usr/libexec/iptables/iptables.init ; argv[]=/usr/libexec/iptables/iptables.init reload ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/usr/libexec/iptables/iptables.init ; argv[]=/usr/libexec/iptables/iptables.init stop ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "TasksCurrent": "18446744073709551615", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "exited", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestampMonotonic": "16100765", "MainPID": "0", "StartupBlockIOWeight": "18446744073709551615", "InactiveExitTimestamp": "Wed 2019-01-09 09:37:14 CET", "FragmentPath": "/usr/lib/systemd/system/iptables.service", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "active", "Nice": "0", "LimitDATA": "18446744073709551615", "UnitFilePreset": "disabled", "MemoryCurrent": "0", "LimitRTTIME": "18446744073709551615", "WantedBy": "basic.target docker.service", "SecureBits": "0", "RestartUSec": "100ms", "ConditionTimestamp": "Wed 2019-01-09 09:37:14 CET", "CPUAccounting": "yes", "RemainAfterExit": "yes", "PrivateNetwork": "no", "Restart": "no", "CPUSchedulingPolicy": "0", "LimitNOFILE": "4096", "SendSIGKILL": "yes", "StatusErrno": "0", "RefuseManualStop": "no", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "StartLimitInterval": "10000000", "StandardInput": "null", "AssertTimestampMonotonic": "16100765", "DefaultDependencies": "yes", "Requires": "basic.target", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "Slice": "system.slice", "ExecMainExitTimestampMonotonic": "16316111", "ConsistsOf": "docker.service", "NotifyAccess": "none", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "PrivateTmp": "no", "OnFailureJobMode": "replace", "AssertResult": "yes", "LimitLOCKS": "18446744073709551615", "ExecMainStartTimestampMonotonic": "16102475", "StandardError": "syslog", "Wants": "system.slice", "After": "system.slice syslog.target systemd-journald.socket basic.target", "FailureAction": "none", "CanIsolate": "no", "Conflicts": "shutdown.target", "StandardOutput": "syslog", "MountFlags": "0", "InactiveEnterTimestampMonotonic": "0", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "Transient": "no", "IOScheduling": "0", "Description": "IPv4 firewall with iptables", "ActiveExitTimestampMonotonic": "0", "ExecMainExitTimestamp": "Wed 2019-01-09 09:37:15 CET", "CanReload": "yes", "ControlPID": "0", "LimitNICE": "0", "BlockIOWeight": "18446744073709551615", "Names": "iptables.service", "ProtectSystem": "no", "PrivateDevices": "no", "Id": "iptables.service"}, "invocation": {"module_args": {"no_block": false, "force": null, "name": "iptables", "enabled": null, "daemon_reload": false, "state": null, "user": false, "masked": null}}, "changed": false, "name": "iptables"}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, 
"invocation": { "module_args": { "daemon_reload": false, "enabled": null, "force": null, "masked": null, "name": "iptables", "no_block": false, "state": null, "user": false } }, "name": "iptables", "status": { "ActiveEnterTimestamp": "Wed 2019-01-09 09:37:15 CET", "ActiveEnterTimestampMonotonic": "16320198", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice syslog.target systemd-journald.socket basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Wed 2019-01-09 09:37:14 CET", "AssertTimestampMonotonic": "16100765", "Before": "ip6tables.service shutdown.target docker.service network.service", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2019-01-09 09:37:14 CET", "ConditionTimestampMonotonic": "16100765", "Conflicts": "shutdown.target", "ConsistsOf": "docker.service", "ControlGroup": "/system.slice/iptables.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "IPv4 firewall with iptables", "DevicePolicy": "auto", "Environment": "BOOTUP=serial CONSOLETYPE=serial", "ExecMainCode": "1", "ExecMainExitTimestamp": "Wed 2019-01-09 09:37:15 CET", "ExecMainExitTimestampMonotonic": "16316111", "ExecMainPID": "996", "ExecMainStartTimestamp": "Wed 2019-01-09 09:37:14 CET", "ExecMainStartTimestampMonotonic": "16102475", "ExecMainStatus": "0", "ExecReload": "{ path=/usr/libexec/iptables/iptables.init ; argv[]=/usr/libexec/iptables/iptables.init reload ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/libexec/iptables/iptables.init ; argv[]=/usr/libexec/iptables/iptables.init start ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/usr/libexec/iptables/iptables.init ; argv[]=/usr/libexec/iptables/iptables.init stop ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/iptables.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "iptables.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Wed 2019-01-09 09:37:14 CET", "InactiveExitTimestampMonotonic": "16102566", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "63379", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63379", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "0", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "iptables.service", 
"NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "yes", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "syslog", "StandardInput": "null", "StandardOutput": "syslog", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "basic.target docker.service", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [Set fact os_firewall_use_firewalld FALSE for iptables] **************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/init.yml:34 Wednesday 09 January 2019 15:54:48 +0100 (0:00:00.308) 0:15:22.795 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "os_firewall_use_firewalld": false }, "changed": false } META: ran handlers META: ran handlers PLAY [Configure the upgrade target for the common upgrade tasks 3.11] ******************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/v3_11/upgrade_control_plane_part2.yml:24 Wednesday 09 January 2019 15:54:48 +0100 (0:00:00.142) 0:15:22.937 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_release": "3.11", "openshift_upgrade_min": "3.10", "openshift_upgrade_target": "3.11" }, "changed": false } META: ran handlers META: ran handlers PLAY [Filter list of nodes to be upgraded if necessary] 
********************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [Retrieve list of openshift nodes matching upgrade label] ************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml:11 Wednesday 09 January 2019 15:54:48 +0100 (0:00:00.142) 0:15:23.080 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Fail if no nodes match openshift_upgrade_nodes_label] ***************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml:18 Wednesday 09 January 2019 15:54:48 +0100 (0:00:00.113) 0:15:23.194 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Map labelled nodes to inventory hosts] ******************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml:25 Wednesday 09 January 2019 15:54:49 +0100 (0:00:00.111) 0:15:23.305 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-master01.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-master01.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-infra01.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-infra01.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-infra02.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-infra02.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node02.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node02.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node03.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node03.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node04.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node04.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node05.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node05.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node06.os.ad.scanplus.de) 
=> { "changed": false, "item": "sp-os-node06.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node07.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node07.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node08.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node08.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node09.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node09.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node10.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node10.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node11.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node11.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node12.os.ad.scanplus.de) => { "changed": false, "item": "sp-os-node12.os.ad.scanplus.de", "skip_reason": "Conditional result was False" } TASK [Evaluate oo_nodes_to_upgrade] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml:39 Wednesday 09 January 2019 15:54:49 +0100 (0:00:00.192) 0:15:23.498 ***** creating host via 'add_host': hostname=sp-os-master01.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-master01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-master01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-master01.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-infra01.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-infra01.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-infra01.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-infra01.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-infra02.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-infra02.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-infra02.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-infra02.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node02.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node02.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node02.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node02.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node03.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node03.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node03.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node03.os.ad.scanplus.de" } creating host via 'add_host': 
hostname=sp-os-node04.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node04.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node04.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node04.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node05.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node05.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node05.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node05.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node06.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node06.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node06.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node06.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node07.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node07.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node07.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node07.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node08.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node08.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node08.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node08.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node09.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node09.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node09.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node09.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node10.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node10.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node10.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node10.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node11.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node11.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node11.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node11.os.ad.scanplus.de" } creating host via 'add_host': hostname=sp-os-node12.os.ad.scanplus.de ok: [sp-os-master01.os.ad.scanplus.de] => (item=sp-os-node12.os.ad.scanplus.de) => { "add_host": { "groups": [ "oo_nodes_to_upgrade" ], "host_name": "sp-os-node12.os.ad.scanplus.de", "host_vars": {} }, "changed": false, "item": "sp-os-node12.os.ad.scanplus.de" } META: ran handlers META: ran handlers PLAY [Update repos on upgrade hosts] **************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [openshift_repos : Ensure libselinux-python is installed] 
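The repo play first makes sure the SELinux Python bindings that yum needs are present; the module arguments echoed below correspond to a task of the form:

  - name: Ensure libselinux-python is installed
    yum:
      name: libselinux-python
      state: present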
************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/main.yaml:6 Wednesday 09 January 2019 15:54:49 +0100 (0:00:00.567) 0:15:24.065 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["libselinux-python"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["libselinux-python-2.5-12.el7.x86_64 providing libselinux-python is already installed"], "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "libselinux-python" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "libselinux-python-2.5-12.el7.x86_64 providing libselinux-python is already installed" ] } TASK [openshift_repos : Remove openshift_additional.repo file] ************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/main.yaml:13 Wednesday 09 January 2019 15:55:03 +0100 (0:00:13.177) 0:15:37.242 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/etc/yum.repos.d/openshift_additional.repo", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "absent", "content": null, "serole": null, "setype": null, "dest": "/etc/yum.repos.d/openshift_additional.repo", "selevel": null, 
"regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "path": "/etc/yum.repos.d/openshift_additional.repo", "state": "absent", "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/etc/yum.repos.d/openshift_additional.repo", "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "owner": null, "path": "/etc/yum.repos.d/openshift_additional.repo", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "absent", "unsafe_writes": null } }, "path": "/etc/yum.repos.d/openshift_additional.repo", "state": "absent" } TASK [openshift_repos : Create any additional repos that are defined] ******************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/main.yaml:18 Wednesday 09 January 2019 15:55:03 +0100 (0:00:00.378) 0:15:37.621 ***** TASK [openshift_repos : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/main.yaml:39 Wednesday 09 January 2019 15:55:03 +0100 (0:00:00.111) 0:15:37.732 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_repos : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/main.yaml:45 Wednesday 09 January 2019 15:55:03 +0100 (0:00:00.124) 0:15:37.856 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_repos : Ensure clean repo cache in the event repos have been changed manually] ****************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/main.yaml:52 Wednesday 09 January 2019 15:55:03 +0100 (0:00:00.120) 0:15:37.977 ***** NOTIFIED HANDLER openshift_repos : refresh cache for sp-os-master01.os.ad.scanplus.de changed: [sp-os-master01.os.ad.scanplus.de] => { "msg": "First run of openshift_repos" } TASK [openshift_repos : Record that openshift_repos already ran] 
************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_repos/tasks/main.yaml:58 Wednesday 09 January 2019 15:55:03 +0100 (0:00:00.145) 0:15:38.122 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "r_openshift_repos_has_run": true }, "changed": false } RUNNING HANDLER [openshift_repos : refresh cache] *************************************************************************************************************************************************************************************************************************************************************************** Wednesday 09 January 2019 15:55:03 +0100 (0:00:00.054) 0:15:38.177 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:55:09.206431", "stdout": "Loaded plugins: product-id, search-disabled-repos, subscription-manager\\nCleaning repos: rhel-7-fast-datapath-rpms rhel-7-server-ansible-2.4-rpms\\n : rhel-7-server-ansible-2.6-rpms rhel-7-server-extras-rpms\\n : rhel-7-server-ose-3.11-rpms rhel-7-server-rpms\\nCleaning up everything\\nMaybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos", "cmd": ["yum", "clean", "all"], "rc": 0, "start": "2019-01-09 15:55:04.196234", "stderr": "", "delta": "0:00:05.010197", "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "yum clean all", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "yum", "clean", "all" ], "delta": "0:00:05.010197", "end": "2019-01-09 15:55:09.206431", "invocation": { "module_args": { "_raw_params": "yum clean all", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "rc": 0, "start": "2019-01-09 15:55:04.196234", "stderr": "", "stderr_lines": [], "stdout": "Loaded plugins: product-id, search-disabled-repos, subscription-manager\nCleaning repos: rhel-7-fast-datapath-rpms rhel-7-server-ansible-2.4-rpms\n : rhel-7-server-ansible-2.6-rpms rhel-7-server-extras-rpms\n : rhel-7-server-ose-3.11-rpms rhel-7-server-rpms\nCleaning up everything\nMaybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos", "stdout_lines": [ "Loaded plugins: product-id, search-disabled-repos, subscription-manager", "Cleaning repos: rhel-7-fast-datapath-rpms rhel-7-server-ansible-2.4-rpms", " : rhel-7-server-ansible-2.6-rpms rhel-7-server-extras-rpms", " : rhel-7-server-ose-3.11-rpms rhel-7-server-rpms", "Cleaning up everything", "Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos" ] } META: ran handlers 
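As the captured stdout shows, the notified handler simply flushes all yum metadata on the host. Expressed as a handler task it is equivalent to:

  - name: refresh cache
    command: yum clean all
    args:
      warn: false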
META: ran handlers META: ran handlers PLAY [Set openshift_no_proxy_internal_hostnames] **************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/config.yml:16 Wednesday 09 January 2019 15:55:09 +0100 (0:00:05.384) 0:15:43.561 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Disable excluders] **************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [openshift_excluder : Debug r_openshift_excluder_enable_docker_excluder] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/main.yml:4 Wednesday 09 January 2019 15:55:09 +0100 (0:00:00.123) 0:15:43.684 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "r_openshift_excluder_enable_docker_excluder": true } TASK [openshift_excluder : Debug r_openshift_excluder_enable_openshift_excluder] ******************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/main.yml:8 Wednesday 09 January 2019 15:55:09 +0100 (0:00:00.140) 0:15:43.825 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "r_openshift_excluder_enable_openshift_excluder": true } TASK [openshift_excluder : Fail if invalid openshift_excluder_action provided] ********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/main.yml:12 Wednesday 09 January 2019 15:55:09 +0100 (0:00:00.152) 0:15:43.978 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Fail if r_openshift_excluder_upgrade_target is not defined] ************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/main.yml:17 Wednesday 09 
January 2019 15:55:09 +0100 (0:00:00.112) 0:15:44.091 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Include main action task file] ******************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/main.yml:24 Wednesday 09 January 2019 15:55:10 +0100 (0:00:00.228) 0:15:44.319 ***** statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_upgrade.yml statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml statically imported: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml included: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/disable.yml for sp-os-master01.os.ad.scanplus.de TASK [openshift_excluder : Get available excluder version] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:4 Wednesday 09 January 2019 15:55:10 +0100 (0:00:00.261) 0:15:44.580 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/repoquery.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"retries": 4, "verbose": false, "name": "atomic-openshift-docker-excluder-3.11*", "ignore_excluders": true, "query_type": "repos", "retry_interval": 5, "match_version": null, "state": "list", "show_duplicates": false}}, "state": "list", "changed": false, "check_mode": false, "results": {"package_found": true, "cmd": "/usr/bin/repoquery --plugins --quiet --pkgnarrow=repos --queryformat=%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release} --config=/tmp/tmp07Vh6C atomic-openshift-docker-excluder-3.11*", "returncode": 0, "package_name": "atomic-openshift-docker-excluder-3.11*", "versions": {"latest_full": "3.11.51-1.git.0.1560686.el7", "available_versions": ["3.11.51"], "available_versions_full": ["3.11.51-1.git.0.1560686.el7"], "latest": "3.11.51"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "check_mode": false, "invocation": { "module_args": { "ignore_excluders": true, "match_version": null, "name": "atomic-openshift-docker-excluder-3.11*", 
"query_type": "repos", "retries": 4, "retry_interval": 5, "show_duplicates": false, "state": "list", "verbose": false } }, "results": { "cmd": "/usr/bin/repoquery --plugins --quiet --pkgnarrow=repos --queryformat=%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release} --config=/tmp/tmp07Vh6C atomic-openshift-docker-excluder-3.11*", "package_found": true, "package_name": "atomic-openshift-docker-excluder-3.11*", "returncode": 0, "versions": { "available_versions": [ "3.11.51" ], "available_versions_full": [ "3.11.51-1.git.0.1560686.el7" ], "latest": "3.11.51", "latest_full": "3.11.51-1.git.0.1560686.el7" } }, "state": "list" } TASK [openshift_excluder : Fail when excluder package is not found] ********************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:10 Wednesday 09 January 2019 15:55:32 +0100 (0:00:22.584) 0:16:07.165 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Set fact excluder_version] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:15 Wednesday 09 January 2019 15:55:33 +0100 (0:00:00.118) 0:16:07.283 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "excluder_version": "3.11.51" }, "changed": false } TASK [openshift_excluder : atomic-openshift-docker-excluder version detected] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:19 Wednesday 09 January 2019 15:55:33 +0100 (0:00:00.148) 0:16:07.432 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "atomic-openshift-docker-excluder: 3.11.51" } TASK [openshift_excluder : Printing upgrade target version] ***************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:23 Wednesday 09 January 2019 15:55:33 +0100 (0:00:00.149) 0:16:07.581 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "3.11" } TASK [openshift_excluder : Check the available atomic-openshift-docker-excluder version is at most of the upgrade target version] ******************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:27 Wednesday 09 January 2019 15:55:33 +0100 (0:00:00.142) 0:16:07.724 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": 
false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Get available excluder version] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:4 Wednesday 09 January 2019 15:55:33 +0100 (0:00:00.119) 0:16:07.844 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/repoquery.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"retries": 4, "verbose": false, "name": "atomic-openshift-excluder-3.11*", "ignore_excluders": true, "query_type": "repos", "retry_interval": 5, "match_version": null, "state": "list", "show_duplicates": false}}, "state": "list", "changed": false, "check_mode": false, "results": {"package_found": true, "cmd": "/usr/bin/repoquery --plugins --quiet --pkgnarrow=repos --queryformat=%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release} --config=/tmp/tmpByH6xV atomic-openshift-excluder-3.11*", "returncode": 0, "package_name": "atomic-openshift-excluder-3.11*", "versions": {"latest_full": "3.11.51-1.git.0.1560686.el7", "available_versions": ["3.11.51"], "available_versions_full": ["3.11.51-1.git.0.1560686.el7"], "latest": "3.11.51"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "check_mode": false, "invocation": { "module_args": { "ignore_excluders": true, "match_version": null, "name": "atomic-openshift-excluder-3.11*", "query_type": "repos", "retries": 4, "retry_interval": 5, "show_duplicates": false, "state": "list", "verbose": false } }, "results": { "cmd": "/usr/bin/repoquery --plugins --quiet --pkgnarrow=repos --queryformat=%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release} --config=/tmp/tmpByH6xV atomic-openshift-excluder-3.11*", "package_found": true, "package_name": "atomic-openshift-excluder-3.11*", "returncode": 0, "versions": { "available_versions": [ "3.11.51" ], "available_versions_full": [ "3.11.51-1.git.0.1560686.el7" ], "latest": "3.11.51", "latest_full": "3.11.51-1.git.0.1560686.el7" } }, "state": "list" } TASK [openshift_excluder : Fail when excluder package is not found] ********************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:10 Wednesday 09 January 2019 15:55:58 +0100 (0:00:25.195) 0:16:33.039 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Set fact excluder_version] 
*********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:15 Wednesday 09 January 2019 15:55:58 +0100 (0:00:00.120) 0:16:33.160 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "excluder_version": "3.11.51" }, "changed": false } TASK [openshift_excluder : atomic-openshift-excluder version detected] ****************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:19 Wednesday 09 January 2019 15:55:59 +0100 (0:00:00.177) 0:16:33.337 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "atomic-openshift-excluder: 3.11.51" } TASK [openshift_excluder : Printing upgrade target version] ***************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:23 Wednesday 09 January 2019 15:55:59 +0100 (0:00:00.163) 0:16:33.500 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "3.11" } TASK [openshift_excluder : Check the available atomic-openshift-excluder version is at most of the upgrade target version] ************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:27 Wednesday 09 January 2019 15:55:59 +0100 (0:00:00.306) 0:16:33.807 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Check for docker-excluder] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:6 Wednesday 09 January 2019 15:55:59 +0100 (0:00:00.249) 0:16:34.056 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/sbin/atomic-openshift-docker-excluder", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": 
false, "device_type": 0, "mtime": 1543815009.0, "block_size": 4096, "inode": 272454, "isgid": false, "size": 2471, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/sbin/atomic-openshift-docker-excluder", "xusr": true, "atime": 1547019503.6686108, "isdir": false, "ctime": 1547019503.4886074, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0744", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/sbin/atomic-openshift-docker-excluder" } }, "stat": { "atime": 1547019503.6686108, "block_size": 4096, "blocks": 8, "ctime": 1547019503.4886074, "dev": 64769, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 272454, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0744", "mtime": 1543815009.0, "nlink": 1, "path": "/sbin/atomic-openshift-docker-excluder", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 2471, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [openshift_excluder : disable docker excluder] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:14 Wednesday 09 January 2019 15:56:00 +0100 (0:00:00.290) 0:16:34.346 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:56:00.379196", "stdout": "", "cmd": ["/sbin/atomic-openshift-docker-excluder", "unexclude"], "rc": 0, "start": "2019-01-09 15:56:00.332805", "stderr": "", "delta": "0:00:00.046391", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/sbin/atomic-openshift-docker-excluder unexclude", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "/sbin/atomic-openshift-docker-excluder", "unexclude" ], "delta": "0:00:00.046391", "end": "2019-01-09 15:56:00.379196", "invocation": { "module_args": { "_raw_params": "/sbin/atomic-openshift-docker-excluder unexclude", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 
15:56:00.332805", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [openshift_excluder : Check for openshift excluder] ******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:20 Wednesday 09 January 2019 15:56:00 +0100 (0:00:00.369) 0:16:34.716 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/sbin/atomic-openshift-excluder", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1543815009.0, "block_size": 4096, "inode": 264171, "isgid": false, "size": 2605, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/sbin/atomic-openshift-excluder", "xusr": true, "atime": 1547019552.1145446, "isdir": false, "ctime": 1547019551.988542, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0744", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/sbin/atomic-openshift-excluder" } }, "stat": { "atime": 1547019552.1145446, "block_size": 4096, "blocks": 8, "ctime": 1547019551.988542, "dev": 64769, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 264171, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0744", "mtime": 1543815009.0, "nlink": 1, "path": "/sbin/atomic-openshift-excluder", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 2605, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [openshift_excluder : disable openshift excluder] ********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:28 Wednesday 09 January 2019 15:56:00 +0100 (0:00:00.313) 0:16:35.030 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o 
ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:56:01.022281", "stdout": "", "cmd": ["/sbin/atomic-openshift-excluder", "unexclude"], "rc": 0, "start": "2019-01-09 15:56:00.990205", "stderr": "", "delta": "0:00:00.032076", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/sbin/atomic-openshift-excluder unexclude", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "/sbin/atomic-openshift-excluder", "unexclude" ], "delta": "0:00:00.032076", "end": "2019-01-09 15:56:01.022281", "invocation": { "module_args": { "_raw_params": "/sbin/atomic-openshift-excluder unexclude", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:56:00.990205", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [openshift_excluder : Install docker excluder - yum] ******************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml:9 Wednesday 09 January 2019 15:56:01 +0100 (0:00:00.333) 0:16:35.363 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic-openshift-docker-excluder-3.11**"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "latest", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["All packages providing atomic-openshift-docker-excluder-3.11** are up to date", ""], "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic-openshift-docker-excluder-3.11**" ], "security": false, "skip_broken": false, "state": "latest", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, 
"results": [ "All packages providing atomic-openshift-docker-excluder-3.11** are up to date", "" ] } TASK [openshift_excluder : Install docker excluder - dnf] ******************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml:24 Wednesday 09 January 2019 15:56:34 +0100 (0:00:33.125) 0:17:08.488 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Install openshift excluder - yum] **************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml:34 Wednesday 09 January 2019 15:56:34 +0100 (0:00:00.132) 0:17:08.620 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic-openshift-excluder-3.11**"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "latest", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["All packages providing atomic-openshift-excluder-3.11** are up to date", ""], "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic-openshift-excluder-3.11**" ], "security": false, "skip_broken": false, "state": "latest", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "All packages providing atomic-openshift-excluder-3.11** are up to date", "" ] } TASK [openshift_excluder : Install openshift excluder - dnf] **************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml:48 Wednesday 09 January 2019 15:57:05 +0100 (0:00:31.289) 0:17:39.910 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was 
False" } TASK [openshift_excluder : set_fact] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml:58 Wednesday 09 January 2019 15:57:05 +0100 (0:00:00.121) 0:17:40.032 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "r_openshift_excluder_install_ran": true }, "changed": false } TASK [openshift_excluder : Check for docker-excluder] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:3 Wednesday 09 January 2019 15:57:05 +0100 (0:00:00.150) 0:17:40.182 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/sbin/atomic-openshift-docker-excluder", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1543815009.0, "block_size": 4096, "inode": 272454, "isgid": false, "size": 2471, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/sbin/atomic-openshift-docker-excluder", "xusr": true, "atime": 1547019503.6686108, "isdir": false, "ctime": 1547019503.4886074, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0744", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/sbin/atomic-openshift-docker-excluder" } }, "stat": { "atime": 1547019503.6686108, "block_size": 4096, "blocks": 8, "ctime": 1547019503.4886074, "dev": 64769, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 272454, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0744", "mtime": 1543815009.0, "nlink": 1, "path": "/sbin/atomic-openshift-docker-excluder", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 2471, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [openshift_excluder : Check for openshift excluder] 
******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:11 Wednesday 09 January 2019 15:57:06 +0100 (0:00:00.286) 0:17:40.469 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/sbin/atomic-openshift-excluder", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1543815009.0, "block_size": 4096, "inode": 264171, "isgid": false, "size": 2605, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/sbin/atomic-openshift-excluder", "xusr": true, "atime": 1547019552.1145446, "isdir": false, "ctime": 1547019551.988542, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0744", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/sbin/atomic-openshift-excluder" } }, "stat": { "atime": 1547019552.1145446, "block_size": 4096, "blocks": 8, "ctime": 1547019551.988542, "dev": 64769, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 264171, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0744", "mtime": 1543815009.0, "nlink": 1, "path": "/sbin/atomic-openshift-excluder", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 2605, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [openshift_excluder : Enable docker excluder] ************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:19 Wednesday 09 January 2019 15:57:06 +0100 (0:00:00.300) 0:17:40.770 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:06.768815", "stdout": "", "cmd": ["/sbin/atomic-openshift-docker-excluder", "exclude"], "rc": 0, "start": "2019-01-09 15:57:06.733408", "stderr": "", "delta": "0:00:00.035407", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/sbin/atomic-openshift-docker-excluder exclude", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "/sbin/atomic-openshift-docker-excluder", "exclude" ], "delta": "0:00:00.035407", "end": "2019-01-09 15:57:06.768815", "invocation": { "module_args": { "_raw_params": "/sbin/atomic-openshift-docker-excluder exclude", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:57:06.733408", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [openshift_excluder : Enable openshift excluder] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:25 Wednesday 09 January 2019 15:57:06 +0100 (0:00:00.335) 0:17:41.105 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:07.257169", "stdout": "", "cmd": ["/sbin/atomic-openshift-excluder", "exclude"], "rc": 0, "start": "2019-01-09 15:57:07.211520", "stderr": "", "delta": "0:00:00.045649", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/sbin/atomic-openshift-excluder exclude", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "/sbin/atomic-openshift-excluder", "exclude" ], "delta": "0:00:00.045649", "end": "2019-01-09 15:57:07.257169", "invocation": { "module_args": { "_raw_params": "/sbin/atomic-openshift-excluder exclude", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:57:07.211520", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [openshift_excluder : Check for docker-excluder] 
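The exclude/unexclude verbs seen throughout this section manage yum package excludes: exclude pins the atomic-openshift-* (or docker) packages so a stray yum update cannot move them between playbook runs, while unexclude lifts the pin for the playbook's own package work. The upgrade brackets its package operations accordingly; a sketch of that sequencing, with an illustrative package glob standing in for the role's real update step:

    # Bracket around package work: lift the pin, update, restore the pin.
    - name: disable openshift excluder
      command: /sbin/atomic-openshift-excluder unexclude

    - name: Update OpenShift packages            # illustrative step
      yum:
        name: "atomic-openshift*-3.11*"          # hypothetical glob
        state: latest

    - name: Enable openshift excluder
      command: /sbin/atomic-openshift-excluder exclude
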
*********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:6 Wednesday 09 January 2019 15:57:07 +0100 (0:00:00.485) 0:17:41.591 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/sbin/atomic-openshift-docker-excluder", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1543815009.0, "block_size": 4096, "inode": 272454, "isgid": false, "size": 2471, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/sbin/atomic-openshift-docker-excluder", "xusr": true, "atime": 1547019503.6686108, "isdir": false, "ctime": 1547019503.4886074, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0744", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/sbin/atomic-openshift-docker-excluder" } }, "stat": { "atime": 1547019503.6686108, "block_size": 4096, "blocks": 8, "ctime": 1547019503.4886074, "dev": 64769, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 272454, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0744", "mtime": 1543815009.0, "nlink": 1, "path": "/sbin/atomic-openshift-docker-excluder", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 2471, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [openshift_excluder : disable docker excluder] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:14 Wednesday 09 January 2019 15:57:07 +0100 (0:00:00.548) 0:17:42.139 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_excluder : Check for openshift excluder] 
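Note the asymmetry in this second unexclude pass: "disable docker excluder" is skipped while the openshift excluder is lifted again, because each action is gated on a per-excluder enablement flag in addition to the stat result. Only the OpenShift package pin is released for the remaining verification work; docker stays pinned. A sketch of the double gate (the flag name is approximate, not necessarily the role's exact variable):

    - name: disable docker excluder
      command: /sbin/atomic-openshift-docker-excluder unexclude
      when:
        - docker_excluder_stat.stat.exists                       # from the stat task
        - r_openshift_excluder_enable_docker_excluder | bool     # flag name approximate
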
******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:20 Wednesday 09 January 2019 15:57:08 +0100 (0:00:00.112) 0:17:42.252 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/sbin/atomic-openshift-excluder", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1543815009.0, "block_size": 4096, "inode": 264171, "isgid": false, "size": 2605, "wgrp": false, "executable": true, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/sbin/atomic-openshift-excluder", "xusr": true, "atime": 1547019552.1145446, "isdir": false, "ctime": 1547019551.988542, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0744", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/sbin/atomic-openshift-excluder" } }, "stat": { "atime": 1547019552.1145446, "block_size": 4096, "blocks": 8, "ctime": 1547019551.988542, "dev": 64769, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 264171, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0744", "mtime": 1543815009.0, "nlink": 1, "path": "/sbin/atomic-openshift-excluder", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 2605, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [openshift_excluder : disable openshift excluder] ********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:28 Wednesday 09 January 2019 15:57:08 +0100 (0:00:00.284) 0:17:42.536 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:08.552815", "stdout": "", "cmd": ["/sbin/atomic-openshift-excluder", "unexclude"], "rc": 0, "start": "2019-01-09 15:57:08.504428", "stderr": "", "delta": "0:00:00.048387", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/sbin/atomic-openshift-excluder unexclude", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "/sbin/atomic-openshift-excluder", "unexclude" ], "delta": "0:00:00.048387", "end": "2019-01-09 15:57:08.552815", "invocation": { "module_args": { "_raw_params": "/sbin/atomic-openshift-excluder unexclude", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:57:08.504428", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } META: ran handlers META: ran handlers PLAY [Determine openshift_version to configure on first master] ************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [include_role : openshift_version] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/version.yml:5 Wednesday 09 January 2019 15:57:08 +0100 (0:00:00.391) 0:17:42.927 ***** TASK [openshift_version : Use openshift_current_version fact as version to configure if already installed] ****************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:6 Wednesday 09 January 2019 15:57:08 +0100 (0:00:00.213) 0:17:43.141 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_version : Set openshift_version to openshift_release if undefined] ****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:14 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.110) 0:17:43.252 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_version : debug] 
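Because openshift_version is already resolved for this cluster, the set_fact branches above all skip and only the consistency assertions execute: openshift_release must be contained in both openshift_image_tag and openshift_pkg_version, which keeps images and RPMs on the same release train. The shape of such a check, as a sketch (the role's actual assertion carries more conditions):

    - name: assert openshift_release in openshift_image_tag
      assert:
        that:
          - openshift_release in openshift_image_tag
        msg: >-
          openshift_image_tag {{ openshift_image_tag }} does not belong
          to release {{ openshift_release }}
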
******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:21 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.103) 0:17:43.355 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_version : set_fact] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:23 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.104) 0:17:43.459 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:30 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.107) 0:17:43.566 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_version : set_fact] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:32 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.117) 0:17:43.684 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_version : assert openshift_release in openshift_image_tag] ************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:36 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.104) 0:17:43.788 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "All assertions passed" } TASK [openshift_version : assert openshift_release in openshift_pkg_version] ************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:43 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.135) 0:17:43.924 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "All assertions passed" } TASK [openshift_version : debug] 
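The debug tasks that follow show how the three version knobs line up for this run: the release train is bare, the image tag carries the registry's "v" prefix, and the package version is a yum glob appended directly to RPM names, hence the leading dash. Expressed as inventory-style variables:

    openshift_release: "3.11"          # release train, bare
    openshift_image_tag: "v3.11"       # appended to image names, "v" prefix
    openshift_pkg_version: "-3.11*"    # appended to RPM names, yum glob

Pinning an exact build would use the same mechanism, e.g. openshift_pkg_version: "-3.11.51".
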
******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:51 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.140) 0:17:44.064 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_release": "3.11" } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:53 Wednesday 09 January 2019 15:57:09 +0100 (0:00:00.145) 0:17:44.209 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_image_tag": "v3.11" } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:55 Wednesday 09 January 2019 15:57:10 +0100 (0:00:00.127) 0:17:44.337 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_pkg_version": "-3.11*" } TASK [openshift_version : debug] ******************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_version/tasks/first_master.yml:57 Wednesday 09 January 2019 15:57:10 +0100 (0:00:00.133) 0:17:44.470 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "openshift_version": "3.11" } META: ran handlers META: ran handlers PLAY [Set openshift_version for etcd, node, and master hosts] *************************************************************************************************************************************************************************************************************************************************************** skipping: no hosts matched PLAY [OpenShift Health Checks] ********************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers META: ran handlers TASK [Run health checks (upgrade)] ****************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/config.yml:45 Wednesday 09 January 2019 15:57:10 +0100 (0:00:00.147) 0:17:44.617 ***** CHECK [disk_availability : sp-os-master01.os.ad.scanplus.de] 
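The pre-flight health checks run through a dedicated action plugin rather than as ordinary tasks. disk_availability verifies free space on the relevant mount points; docker_image_availability first queries the local Docker daemon for each required image (the empty "images": [] results that follow) and, for images not cached locally, falls back to probing the registry with skopeo inspect, which is where the long tag listings below come from. A sketch of the wiring and of the fallback probe, with placeholder credential variables (the actual check derives registries and credentials from the inventory):

    - name: Run health checks (upgrade)
      action: openshift_health_check
      args:
        checks:
          - disk_availability
          - docker_image_availability

    # Fallback probe performed per missing image (placeholder credentials):
    - name: Probe registry for a required image
      command: >
        timeout 10 skopeo inspect --tls-verify=true
        --creds={{ registry_user }}:{{ registry_password }}
        docker://registry.redhat.io/openshift3/ose-docker-registry:v3.11
      changed_when: false
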
**************************************************************************************************************************************************************************************************************************************************************** CHECK [docker_image_availability : sp-os-master01.os.ad.scanplus.de] ******************************************************************************************************************************************************************************************************************************************************** Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/docker/docker_image_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"images": [], "invocation": {"module_args": {"tls": false, "cacert_path": null, "name": ["registry.redhat.io/openshift3/ose-haproxy-router:v3.11"], "ssl_version": null, "tls_hostname": "localhost", "docker_host": "unix://var/run/docker.sock", "tls_verify": false, "key_path": null, "timeout": 60, "debug": false, "cert_path": null, "api_version": "auto"}}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/docker/docker_image_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"images": [], "invocation": {"module_args": {"tls": false, "cacert_path": null, "name": ["registry.redhat.io/openshift3/ose-docker-registry:v3.11"], "ssl_version": null, "tls_hostname": "localhost", "docker_host": "unix://var/run/docker.sock", "tls_verify": false, "key_path": null, "timeout": 60, "debug": false, "cert_path": null, "api_version": "auto"}}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/docker/docker_image_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"images": [], "invocation": {"module_args": {"tls": false, "cacert_path": null, "name": ["registry.redhat.io/openshift3/ose-deployer:v3.11"], "ssl_version": null, "tls_hostname": "localhost", "docker_host": "unix://var/run/docker.sock", "tls_verify": false, "key_path": null, "timeout": 60, "debug": false, "cert_path": null, "api_version": "auto"}}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/docker/docker_image_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"images": [], "invocation": {"module_args": {"tls": false, "cacert_path": null, "name": ["registry.redhat.io/openshift3/ose-pod:v3.11"], "ssl_version": null, "tls_hostname": "localhost", "docker_host": "unix://var/run/docker.sock", "tls_verify": false, "key_path": null, "timeout": 60, "debug": false, "cert_path": null, "api_version": "auto"}}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/docker/docker_image_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"images": [], "invocation": {"module_args": {"tls": false, "cacert_path": null, "name": ["registry.redhat.io/openshift3/registry-console:v3.11"], "ssl_version": null, "tls_hostname": "localhost", "docker_host": "unix://var/run/docker.sock", "tls_verify": false, "key_path": null, "timeout": 60, "debug": false, "cert_path": null, "api_version": "auto"}}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/docker/docker_image_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"images": [{"Comment": "", "Container": "cf0ed98938bd285c87df22dc30c4237b908af0e68c49ef3a629a59e1f1cc7054", "DockerVersion": "1.13.1", "Parent": "", "Created": "2018-12-04T06:30:55.775989943Z", "Os": "linux", "Author": "", "RepoDigests": ["registry.redhat.io/openshift3/ose-control-plane@sha256:01e44374022557bcb5976ff43056196db2cfee87e978a972c1b8f2b111c481ca"], "Architecture": "amd64", "Size": 806835916, "RepoTags": ["registry.redhat.io/openshift3/ose-control-plane:v3.11"], "GraphDriver": {"Data": {"LowerDir": "/var/lib/docker/overlay2/628a6da4bdbcc521ff10888a3345c90c6a554cd8ddd489aaf099ec6daa295aa6/diff:/var/lib/docker/overlay2/b746b47fcfa1344fba41cd03d7255465608d2e662fbc6eb3b4e5c0a2b3323b7e/diff:/var/lib/docker/overlay2/e029b60876d2d53a1c52860dd145fcc2355ea0b40e0aa60794a73e4954f6ba81/diff:/var/lib/docker/overlay2/da2b404926da2bffba07ba4f043936b341d7f6963ca4c8a55ba3ae32be87f836/diff", "WorkDir": "/var/lib/docker/overlay2/4e7df7cd2aa71ee45b1c78985e39808ff4da2fbc185d245ed220fb8481918c3f/work", "MergedDir": "/var/lib/docker/overlay2/4e7df7cd2aa71ee45b1c78985e39808ff4da2fbc185d245ed220fb8481918c3f/merged", "UpperDir": "/var/lib/docker/overlay2/4e7df7cd2aa71ee45b1c78985e39808ff4da2fbc185d245ed220fb8481918c3f/diff"}, "Name": "overlay2"}, "ContainerConfig": {"Tty": false, "Hostname": "cf0ed98938bd", "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "container=oci"], "Domainname": "", 
"StdinOnce": false, "Image": "sha256:923c3ceafdc228388d0f48828fa11e1257246dd83f64ee622baa23b19277e306", "Cmd": ["sleep 86400"], "WorkingDir": "", "Labels": {"com.redhat.component": "openshift-enterprise-cli-container", "authoritative-source-url": "registry.access.redhat.com", "distribution-scope": "public", "vendor": "Red Hat, Inc.", "description": "OpenShift is a platform for developing, building, and deploying containerized applications.", "License": "GPLv2+", "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/ose-cli/images/v3.11.51-2", "io.k8s.display-name": "OpenShift Client", "vcs-type": "git", "build-date": "2018-12-04T05:58:13.799762", "summary": "Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and supported base image.", "com.redhat.build-host": "cpt-0004.osbs.prod.upshift.rdu2.redhat.com", "version": "v3.11.51", "architecture": "x86_64", "release": "2", "io.openshift.tags": "openshift,cli", "vcs-ref": "c369fc96a1b3b9d2f57b06cbf46cddc15ee537fd", "io.k8s.description": "OpenShift is a platform for developing, building, and deploying containerized applications.", "name": "openshift3/ose-cli"}, "AttachStdin": false, "User": "", "Volumes": null, "Entrypoint": ["/bin/sh", "-c"], "OnBuild": null, "AttachStderr": false, "AttachStdout": false, "OpenStdin": false}, "RootFS": {"Layers": ["sha256:56a763045c4544c21c458f3cd948a46384e4b12f9deacd1aede445af598f6d84", "sha256:ab9227d97750aff30bf47631468e9b69dddcdd4af6a853233d6288d05770fcf8", "sha256:603b77b5e2a8949f6e9835aa9bd7b18d23deb09f649b8dab6681bb6afaf638bd", "sha256:ed0e17126500afa3d0395b950e4de8e77aae6427701991a6f6aec7670bdd1d85", "sha256:e2ff6167111b7db3aa063203ecf7c09ab001cdbdd7780698303889de707dbf77"], "Type": "layers"}, "Config": {"Tty": false, "Hostname": "1d561c58fd2b", "Env": ["HOME=/root", "KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "container=oci"], "Domainname": "", "StdinOnce": false, "Image": "", "Cmd": null, "WorkingDir": "/var/lib/origin", "ArgsEscaped": true, "Labels": {"com.redhat.component": "openshift-enterprise-container", "authoritative-source-url": "registry.access.redhat.com", "distribution-scope": "public", "vendor": "Red Hat, Inc.", "description": "OpenShift Container Platform is a platform for developing, building, and deploying containerized applications.", "License": "GPLv2+", "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/ose-control-plane/images/v3.11.51-2", "io.k8s.display-name": "OpenShift Container Platform Application Platform", "vcs-type": "git", "build-date": "2018-12-04T06:27:45.529866", "summary": "Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and supported base image.", "com.redhat.build-host": "cpt-0012.osbs.prod.upshift.rdu2.redhat.com", "version": "v3.11.51", "architecture": "x86_64", "release": "2", "io.openshift.tags": "openshift,core", "vcs-ref": "b0b2b882cd5494f9f0a257b48cdbd250e7c54f2c", "io.k8s.description": "OpenShift Container Platform is a platform for developing, building, and deploying containerized applications.", "name": "openshift3/ose-control-plane"}, "AttachStdin": false, "User": "", "Volumes": null, "ExposedPorts": {"53/tcp": {}, "8443/tcp": {}}, "OnBuild": null, "AttachStderr": false, "Entrypoint": ["/usr/bin/openshift"], "AttachStdout": false, "OpenStdin": false}, "Id": "sha256:96ee92cf05eab1e9c0276f19c85adb70d89847a3539905f44d15f5749a48e760", 
"VirtualSize": 806835916}], "invocation": {"module_args": {"tls": false, "cacert_path": null, "name": ["registry.redhat.io/openshift3/ose-control-plane:v3.11"], "ssl_version": null, "tls_hostname": "localhost", "docker_host": "unix://var/run/docker.sock", "tls_verify": false, "key_path": null, "timeout": 60, "debug": false, "cert_path": null, "api_version": "auto"}}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/docker/docker_image_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"images": [{"Comment": "", "Container": "", "DockerVersion": "1.13.1", "Parent": "", "Created": "2018-11-20T15:56:12.205697Z", "Os": "linux", "Author": "Avesh Agarwal ", "RepoDigests": ["registry.redhat.io/rhel7/etcd@sha256:6f5b73f472277b9b3f66148bf20247e33f04121236ad25715c1c272af29e620c"], "Architecture": "amd64", "Size": 258848031, "RepoTags": ["registry.redhat.io/rhel7/etcd:3.2.22"], "GraphDriver": {"Data": {"LowerDir": "/var/lib/docker/overlay2/1f894b8dde0484187a7dc8badd1b8abd9cb1bb5fdc4672b11d4c2043dd2fded5/diff:/var/lib/docker/overlay2/757cd5416bf61164aa0b7aa24010e1bae00a58c657150fd9e419d9aa97778783/diff", "WorkDir": "/var/lib/docker/overlay2/d9475352ee0bbd9599b20a94a802d19e7ee96d25cd492b613cfaefdc2cefb180/work", "MergedDir": "/var/lib/docker/overlay2/d9475352ee0bbd9599b20a94a802d19e7ee96d25cd492b613cfaefdc2cefb180/merged", "UpperDir": "/var/lib/docker/overlay2/d9475352ee0bbd9599b20a94a802d19e7ee96d25cd492b613cfaefdc2cefb180/diff"}, "Name": "overlay2"}, "ContainerConfig": {"Tty": false, "Hostname": "f7a32a4dc10a", "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "container=docker"], "Domainname": "", "StdinOnce": false, "Image": "sha256:834d22aff86ebce2fb5457345340fdfed3ca8629bf4be5e466e2bbe066c7edf2", "Cmd": ["/bin/sh", "-c", "rm -f \'/etc/yum.repos.d/extras-rhel-7.6.1-final-test-1-1a884.repo\'"], "WorkingDir": "", "ArgsEscaped": true, "Labels": {"io.k8s.description": "etcd is a distributed reliable key-value store for the most critical data of a distributed system.", "maintainer": "Avesh Agarwal", "run": "/usr/bin/docker run -d $OPT1 -p 4001:4001 -p 7001:7001 -p 2379:2379 -p 2380:2380 --name $NAME $IMAGE $OPT2 $OPT3", "vcs-ref": "739bf3cb5666c29e8d65bb95943b2fbde83e24fa", "authoritative-source-url": "registry.access.redhat.com", "io.k8s.display-name": "etcd", "version": "3.2.22", "usage": "etcd -help ", "com.redhat.component": "etcd-container", "distribution-scope": "public", "vendor": "Red Hat, Inc.", "description": "etcd is a distributed reliable key-value store for the most critical data of a distributed system.", "vcs-type": "git", "com.redhat.build-host": "cpt-0010.osbs.prod.upshift.rdu2.redhat.com", "build-date": "2018-11-20T15:55:51.757023", "name": "rhel7/etcd", "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/rhel7/etcd/images/3.2.22-18", "summary": "A highly-available key value store for shared configuration", "architecture": "x86_64", "install": "/usr/bin/docker run --rm $OPT1 --privileged -v /:/host -e HOST=/host -e NAME=$NAME -e IMAGE=$IMAGE $IMAGE $OPT2 /usr/bin/install.sh $OPT3", "release": "18", 
"io.openshift.expose-services": "2379:tcp,2380:tcp", "io.openshift.tags": "etcd", "uninstall": "/usr/bin/docker run --rm $OPT1 --privileged -v /:/host -e HOST=/host -e NAME=$NAME -e IMAGE=$IMAGE $IMAGE $OPT2 /usr/bin/uninstall.sh $OPT3"}, "AttachStdin": false, "User": "", "Volumes": null, "ExposedPorts": {"2379/tcp": {}, "7001/tcp": {}, "2380/tcp": {}, "4001/tcp": {}}, "OnBuild": [], "AttachStderr": false, "Entrypoint": null, "AttachStdout": false, "OpenStdin": false}, "RootFS": {"Layers": ["sha256:dd7d5adb4579031663c0489591f9516900e3c64727ca9ad0bc4516265703ac92", "sha256:27e45ca143e19ec3a4f6ff98ffbd470680ddb396c83ae76a9dc5e28ec6ade24d", "sha256:bfd885e7059ce4343d4196d0bf65a1ce0a6e0f529ec183a757f6d2cdd6349d36"], "Type": "layers"}, "Config": {"Tty": false, "Hostname": "f7a32a4dc10a", "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "container=docker"], "Domainname": "", "StdinOnce": false, "Image": "2eeffe49d824cdeadb6a6c91236dddb1e2e877c3c7fbfce128c17fe586a15b9b", "Cmd": ["/usr/bin/etcd-env.sh", "/usr/bin/etcd"], "WorkingDir": "", "ArgsEscaped": true, "Labels": {"io.k8s.description": "etcd is a distributed reliable key-value store for the most critical data of a distributed system.", "maintainer": "Avesh Agarwal", "run": "/usr/bin/docker run -d $OPT1 -p 4001:4001 -p 7001:7001 -p 2379:2379 -p 2380:2380 --name $NAME $IMAGE $OPT2 $OPT3", "vcs-ref": "739bf3cb5666c29e8d65bb95943b2fbde83e24fa", "authoritative-source-url": "registry.access.redhat.com", "io.k8s.display-name": "etcd", "version": "3.2.22", "usage": "etcd -help ", "com.redhat.component": "etcd-container", "distribution-scope": "public", "vendor": "Red Hat, Inc.", "description": "etcd is a distributed reliable key-value store for the most critical data of a distributed system.", "vcs-type": "git", "com.redhat.build-host": "cpt-0010.osbs.prod.upshift.rdu2.redhat.com", "build-date": "2018-11-20T15:55:51.757023", "name": "rhel7/etcd", "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/rhel7/etcd/images/3.2.22-18", "summary": "A highly-available key value store for shared configuration", "architecture": "x86_64", "install": "/usr/bin/docker run --rm $OPT1 --privileged -v /:/host -e HOST=/host -e NAME=$NAME -e IMAGE=$IMAGE $IMAGE $OPT2 /usr/bin/install.sh $OPT3", "release": "18", "io.openshift.expose-services": "2379:tcp,2380:tcp", "io.openshift.tags": "etcd", "uninstall": "/usr/bin/docker run --rm $OPT1 --privileged -v /:/host -e HOST=/host -e NAME=$NAME -e IMAGE=$IMAGE $IMAGE $OPT2 /usr/bin/uninstall.sh $OPT3"}, "AttachStdin": false, "User": "", "Volumes": null, "ExposedPorts": {"2379/tcp": {}, "7001/tcp": {}, "2380/tcp": {}, "4001/tcp": {}}, "OnBuild": [], "AttachStderr": false, "Entrypoint": null, "AttachStdout": false, "OpenStdin": false}, "Id": "sha256:635bb36d7fc7b0199d318dcb4fde1aaadf5654b9ad4f9a4a3a1c5fe94c23339f", "VirtualSize": 258848031}], "invocation": {"module_args": {"tls": false, "cacert_path": null, "name": ["registry.redhat.io/rhel7/etcd:3.2.22"], "ssl_version": null, "tls_hostname": "localhost", "docker_host": "unix://var/run/docker.sock", "tls_verify": false, "key_path": null, "timeout": 60, "debug": false, "cert_path": null, "api_version": "auto"}}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:17.285330", "stdout": "{\\n \\"Name\\": \\"registry.redhat.io/openshift3/ose-docker-registry\\",\\n \\"Digest\\": \\"sha256:378117890c6ad070514013a598efd9fddde952d9323af75d77bda38a85685e6f\\",\\n \\"RepoTags\\": [\\n \\"v3.3.0.34-1\\",\\n \\"v3.3.1.20\\",\\n \\"v3.3.1.25\\",\\n \\"v3.5.5.5\\",\\n \\"v3.8.36\\",\\n \\"v3.10.72\\",\\n \\"v3.3.1.4-1\\",\\n \\"v3.7.9-21\\",\\n \\"v3.9.57\\",\\n \\"v3.10.66\\",\\n \\"v3.2.1.9\\",\\n \\"v3.4.1.24\\",\\n \\"v3.1.1.6-3\\",\\n \\"v3.7.64-2\\",\\n \\"v3.6.173.0.83-2\\",\\n \\"v3.3\\",\\n \\"v3.7.62\\",\\n \\"v3.10.45-2\\",\\n \\"v3.3.1.7-0\\",\\n \\"v3.1.1.10-2\\",\\n \\"v3.4.1.44.11-2\\",\\n \\"v3.10.45-6\\",\\n \\"v3.4\\",\\n \\"v3.5.5.31.80\\",\\n \\"v3.9\\",\\n \\"v3.8\\",\\n \\"v3.3.1.38-2\\",\\n \\"v3.2.1.7-1\\",\\n \\"v3.7.23-3\\",\\n \\"v3.1.1.10\\",\\n \\"v3.9.51-2\\",\\n \\"v3.3.1.11-2\\",\\n \\"v3.5.5.31.36\\",\\n \\"v3.6.173.0.124-2\\",\\n \\"v3.11.51\\",\\n \\"v3.7.54-1\\",\\n \\"v3.9.43\\",\\n \\"v3.8.37\\",\\n \\"v3.9.40\\",\\n \\"v3.11.51-2\\",\\n \\"v3.5.5.8\\",\\n \\"v3.4.1.44.17-14\\",\\n \\"v3.5.5.26-2\\",\\n \\"v3.6.173.0.113-3\\",\\n \\"v3.3.1.14-1\\",\\n \\"v3.5.5.31.48-2\\",\\n \\"v3.2.1.1\\",\\n \\"v3.3.1.25-3\\",\\n \\"v3.2.1.4\\",\\n \\"v3.7.9\\",\\n \\"v3.4.1.44.38-11\\",\\n \\"v3.7.64\\",\\n \\"v3.7.72\\",\\n \\"v3.5.5.26\\",\\n \\"v3.5.5.24\\",\\n \\"v3.6.173.0.117-1\\",\\n \\"v3.6.173.0.5-4\\",\\n \\"v3.6.173.0.5-5\\",\\n \\"v3.3.1.3-1\\",\\n \\"v3.4.1.2-2\\",\\n \\"v3.5.5.5-2\\",\\n \\"v3.6.173.0.5-2\\",\\n \\"v3.1.1.8-1\\",\\n \\"v3.6.173.0.140-2\\",\\n \\"v3.2.1.17-1\\",\\n \\"v3.2.1.26-2\\",\\n \\"v3.11.43\\",\\n \\"v3.4.0.39-2\\",\\n \\"v3.2.0.20\\",\\n \\"v3.9.51\\",\\n \\"v3.5.5.31.66-2\\",\\n \\"v3.7.42-2\\",\\n \\"v3.4.1.16-2\\",\\n \\"v3.4.1.24-3\\",\\n \\"v3.2.0.46\\",\\n \\"v3.2.0.44\\",\\n \\"v3.7.44-3\\",\\n \\"v3.6.173.0.63\\",\\n \\"v3.10.66-4\\",\\n \\"v3.9.25-1\\",\\n \\"v3.6.173.0.96\\",\\n \\"v3.5.5.31.67\\",\\n \\"v3.5.5.31.66\\",\\n \\"v3.5.5.15\\",\\n \\"v3.3.1.19-2\\",\\n \\"v3.6.173.0.49\\",\\n \\"v3.7.46\\",\\n \\"v3.2.0.20-3\\",\\n \\"v3.7.42\\",\\n \\"v3.3.1.46.39\\",\\n \\"v3.3.1.46.39-3\\",\\n \\"v3.7.52-1\\",\\n \\"v3.6.173.0.5\\",\\n \\"v3.6.173.0.140\\",\\n \\"v3.4.1.44.53\\",\\n \\"v3.4.1.44.52\\",\\n \\"v3.10.83\\",\\n \\"v3.4.1.44.57\\",\\n \\"v3.6.173.0.96-2\\",\\n \\"v3.9.31-2\\",\\n \\"v3.5.5.31\\",\\n \\"v3.4.1.37-2\\",\\n \\"v3.2\\",\\n \\"v3.4.1.5\\",\\n \\"v3.2.1.15\\",\\n \\"v3.7.61\\",\\n \\"v3.6.173.0.129-2\\",\\n \\"v3.2.1.4-1\\",\\n \\"v3.3.1.46.45\\",\\n \\"v3.2.1.30-3\\",\\n \\"v3.4.1.18-3\\",\\n \\"v3.7\\",\\n \\"v3.3.1.20-5\\",\\n \\"v3.6\\",\\n \\"v3.4.1.37\\",\\n \\"v3.2.1.9-3\\",\\n \\"v3.8.44\\",\\n \\"v3.5\\",\\n \\"v3.3.1.14\\",\\n \\"v3.10.14\\",\\n \\"v3.7.52\\",\\n \\"v3.10.72-3\\",\\n \\"v3.4.1.44.52-3\\",\\n \\"v3.7.57\\",\\n \\"v3.1.1.11\\",\\n \\"v3.1.1.6\\",\\n \\"v3.6.173.0.126-2\\",\\n \\"v3.6.173.0.130\\",\\n \\"v3.8.36-5\\",\\n \\"v3.9.40-2\\",\\n \\"v3.8.44-2\\",\\n \\"v3.6.173.0.123-2\\",\\n \\"v3.2.1.34-5\\",\\n \\"v3.1.0.4\\",\\n \\"v3.2.1.7\\",\\n \\"v3.2.1.34-3\\",\\n \\"v3.5.5.31.19-2\\",\\n \\"v3.4.1.44\\",\\n \\"v3.2.1.30\\",\\n \\"v3.1.0.4-2\\",\\n \\"v3.1.0.4-1\\",\\n \\"v3.5.5.31.47-10\\",\\n \\"v3.4.0.40-1\\",\\n \\"v3.3.0.35-1\\",\\n 
\\"v3.3.1.17-2\\",\\n \\"v3.5.5.31.48\\",\\n \\"v3.3.1.17-4\\",\\n \\"v3.4.1.44-2\\",\\n \\"v3.5.5.31.47\\",\\n \\"v3.2.0.46-2\\",\\n \\"v3.4.0.39\\",\\n \\"v3.6.173.0.83\\",\\n \\"v3.2.1.31\\",\\n \\"v3.5.5.31.67-3\\",\\n \\"v3.2.1.21-2\\",\\n \\"v3.4.0.40\\",\\n \\"v3.0.0.1\\",\\n \\"v3.6.173.0.123\\",\\n \\"v3.9.43-2\\",\\n \\"v3.1.1.11-3\\",\\n \\"v3.1.1.11-2\\",\\n \\"v3.11.16\\",\\n \\"v3.1.1.7-0\\",\\n \\"v3.6.173.0.124\\",\\n \\"v3.6.173.0.21\\",\\n \\"v3.3.1.46.11\\",\\n \\"v3.6.173.0.129\\",\\n \\"v3.6.173.0.128\\",\\n \\"v3.0.0.0\\",\\n \\"v3.5.5.31.80-4\\",\\n \\"v3.7.61-2\\",\\n \\"v3.4.1.44.38\\",\\n \\"v3.4.1.44.17\\",\\n \\"v3.4.1.7-2\\",\\n \\"v3.6.173.0.112-3\\",\\n \\"v3.4.1.44.11\\",\\n \\"v3.3.1.11\\",\\n \\"v3.6.173.0.126\\",\\n \\"v3.7.14\\",\\n \\"v3.3.1.17\\",\\n \\"v3.7.14-5\\",\\n \\"v3.4.1.5-2\\",\\n \\"v3.0.1.0\\",\\n \\"v3.4.1.18\\",\\n \\"v3.2.1.28-3\\",\\n \\"v3.3.1.38\\",\\n \\"v3.4.1.7\\",\\n \\"v3.5.5.24-2\\",\\n \\"v3.4.1.16\\",\\n \\"v3.10.34-3\\",\\n \\"v3.4.1.10\\",\\n \\"v3.10.34\\",\\n \\"v3.4.1.12\\",\\n \\"v3.4.1.33\\",\\n \\"v3.3.1.19\\",\\n \\"v3.9.41-2\\",\\n \\"v3.9.14\\",\\n \\"v3.4.1.44.26-4\\",\\n \\"v3.7.62-2\\",\\n \\"v3.2.1.17\\",\\n \\"v3.3.1.5-2\\",\\n \\"v3.6.173.0.63-11\\",\\n \\"v3.6.173.0.117\\",\\n \\"v3.2.1.13-1\\",\\n \\"v3.6.173.0.112\\",\\n \\"v3.6.173.0.113\\",\\n \\"v3.9.57-2\\",\\n \\"v3.2.1.13\\",\\n \\"v3.1.1.8\\",\\n \\"v3.9.41\\",\\n \\"v3.8.37-2\\",\\n \\"v3.2.1.31-2\\",\\n \\"v3.3.0.32-2\\",\\n \\"v3.5.5.31-2\\",\\n \\"v3.1.1.7\\",\\n \\"v3.7.72-2\\",\\n \\"v3.2.1.15-1\\",\\n \\"v3.4.1.12-3\\",\\n \\"v3.9.30\\",\\n \\"v3.7.46-1\\",\\n \\"v3.4.1.44.26\\",\\n \\"v3.9.27-1\\",\\n \\"v3.10.14-10\\",\\n \\"v3.2.1.1-2\\",\\n \\"v3.10.14-13\\",\\n \\"v3.4.1.10-3\\",\\n \\"v3.11.43-2\\",\\n \\"v3.7.23\\",\\n \\"v3.7.54\\",\\n \\"v3.6.173.0.21-17\\",\\n \\"v3.10.45\\",\\n \\"v3.10.72-5\\",\\n \\"v3.5.5.31.24-15\\",\\n \\"v3.9.31\\",\\n \\"v3.11.16-3\\",\\n \\"v3.1.1.6-8\\",\\n \\"v3.1.1.6-9\\",\\n \\"v3.9.25\\",\\n \\"v3.1.1.6-5\\",\\n \\"v3.9.27\\",\\n \\"v3.1.1.6-7\\",\\n \\"v3.1\\",\\n \\"v3.3.1.46.45-2\\",\\n \\"v3.2.1.26\\",\\n \\"v3.2.1.23\\",\\n \\"v3.2.1.21\\",\\n \\"v3.2.1.23-2\\",\\n \\"v3.5.5.31.24\\",\\n \\"v3.6.173.0.49-4\\",\\n \\"v3.2.1.28\\",\\n \\"v3.4.1.33-2\\",\\n \\"v3.7.57-3\\",\\n \\"v3.9.30-2\\",\\n \\"v3.11\\",\\n \\"v3.10\\",\\n \\"v3.3.1.46.11-3\\",\\n \\"v3.3.1.35\\",\\n \\"v3.4.1.44.57-2\\",\\n \\"v3.9.33-3\\",\\n \\"v3.10.14-8\\",\\n \\"v3.2.1.31-4\\",\\n \\"v3.3.1.3\\",\\n \\"v3.3.1.5\\",\\n \\"v3.3.1.4\\",\\n \\"v3.3.1.7\\",\\n \\"v3.5.5.31.19\\",\\n \\"v3.2.1.34-20\\",\\n \\"v3.4.1.2\\",\\n \\"v3.9.14-2\\",\\n \\"v3.10.83-2\\",\\n \\"v3.2.0.44-2\\",\\n \\"v3.5.5.8-3\\",\\n \\"v3.9.33\\",\\n \\"v3.6.173.0.130-1\\",\\n \\"v3.5.5.31.36-4\\",\\n \\"v3.2.1.34\\",\\n \\"v3.0.2.0\\",\\n \\"v3.4.1.44.53-3\\",\\n \\"v3.7.44\\",\\n \\"v3.3.0.34\\",\\n \\"v3.3.0.35\\",\\n \\"v3.6.173.0.128-2\\",\\n \\"latest\\",\\n \\"v3.5.5.15-3\\",\\n \\"v3.1.1.6-6\\",\\n \\"v3.3.0.32\\",\\n \\"v3.3.1.35-2\\"\\n ],\\n \\"Created\\": \\"2018-12-04T06:07:48.690479Z\\",\\n \\"DockerVersion\\": \\"1.13.1\\",\\n \\"Labels\\": {\\n \\"License\\": \\"GPLv2+\\",\\n \\"architecture\\": \\"x86_64\\",\\n \\"authoritative-source-url\\": \\"registry.access.redhat.com\\",\\n \\"build-date\\": \\"2018-12-04T06:03:23.691460\\",\\n \\"com.redhat.build-host\\": \\"cpt-0011.osbs.prod.upshift.rdu2.redhat.com\\",\\n \\"com.redhat.component\\": \\"openshift-enterprise-registry-container\\",\\n \\"description\\": \\"This is a component of 
OpenShift Container Platform and exposes a container registry that is integrated with the cluster for authentication and management.\\",\\n \\"distribution-scope\\": \\"public\\",\\n \\"io.k8s.description\\": \\"This is a component of OpenShift Container Platform and exposes a container registry that is integrated with the cluster for authentication and management.\\",\\n \\"io.k8s.display-name\\": \\"OpenShift Container Platform Image Registry\\",\\n \\"io.openshift.tags\\": \\"openshift,container,image,registry\\",\\n \\"name\\": \\"openshift3/ose-docker-registry\\",\\n \\"release\\": \\"2\\",\\n \\"summary\\": \\"Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and supported base image.\\",\\n \\"url\\": \\"https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/ose-docker-registry/images/v3.11.51-2\\",\\n \\"vcs-ref\\": \\"df75a19c575268e934a3aaba08b937e9cdb42474\\",\\n \\"vcs-type\\": \\"git\\",\\n \\"vendor\\": \\"Red Hat, Inc.\\",\\n \\"version\\": \\"v3.11.51\\"\\n },\\n \\"Architecture\\": \\"amd64\\",\\n \\"Os\\": \\"linux\\",\\n \\"Layers\\": [\\n \\"sha256:23113ae36f8e9d98b1423e44673979132dec59db2805e473e931d83548b0be82\\",\\n \\"sha256:d134b18b98b0d113b7b1194a60efceaa2c06eff41386d6c14b0e44bfe557eee8\\",\\n \\"sha256:e08cb06c2905b3fe45884de4a320ba7becbc2ee0518067440386f516319cf679\\",\\n \\"sha256:f5aad3ec2bce59a26b8ffa009c0fe3629efd4bccfdb9d3751bd69ab9da1e3882\\"\\n ]\\n}", "cmd": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-docker-registry:v3.11", "rc": 0, "start": "2019-01-09 15:57:13.412200", "stderr": "", "delta": "0:00:03.873130", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-docker-registry:v3.11", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:21.153674", "stdout": "{\\n \\"Name\\": \\"registry.redhat.io/openshift3/registry-console\\",\\n \\"Digest\\": \\"sha256:51c26a1fb3cac10a7d6d404ce4f3aeb0df6c0e1d9380432c9fc2952065a6c7be\\",\\n \\"RepoTags\\": [\\n \\"v3.6.173.0.63-11\\",\\n \\"v3.7.9-21\\",\\n \\"v3.10.66\\",\\n \\"3.3-6\\",\\n \\"v3.7.64-2\\",\\n \\"v3.6.173.0.83-2\\",\\n \\"v3.3\\",\\n \\"v3.7.62\\",\\n \\"v3.7.61\\",\\n \\"v3.7\\",\\n \\"v3.10.45-5\\",\\n \\"v3.5\\",\\n \\"v3.4\\",\\n \\"v3.6.173.0.123-2\\",\\n \\"v3.9\\",\\n \\"v3.8\\",\\n \\"v3.7.23-3\\",\\n \\"3.3-8\\",\\n \\"v3.9.51-2\\",\\n \\"v3.4.1.44.38\\",\\n \\"v3.6.173.0.124-2\\",\\n \\"v3.11.51\\",\\n \\"v3.7.54-1\\",\\n \\"3.3-3\\",\\n \\"v3.9.43\\",\\n \\"v3.8.37\\",\\n \\"v3.9.40\\",\\n \\"v3.11.51-2\\",\\n \\"3.3-28\\",\\n \\"3.3-23\\",\\n \\"3.3-22\\",\\n \\"v3.7.9\\",\\n \\"v3.4.1.44.38-10\\",\\n \\"v3.10.72\\",\\n \\"v3.6.173.0.96\\",\\n \\"v3.6.173.0.49\\",\\n \\"v3.7.72\\",\\n 
\\"3.5.0-33\\",\\n \\"3.3-14\\",\\n \\"v3.6.173.0.117-1\\",\\n \\"v3.6.173.0.5-4\\",\\n \\"v3.6.173.0.5-5\\",\\n \\"v3.6.173.0.5-6\\",\\n \\"3.5.0\\",\\n \\"v3.6.173.0.140-2\\",\\n \\"3.4-13\\",\\n \\"v3.11.43\\",\\n \\"v3.9.57\\",\\n \\"v3.9.51\\",\\n \\"3.4-19\\",\\n \\"v3.7.42-2\\",\\n \\"v3.7.44-2\\",\\n \\"v3.6.173.0.63\\",\\n \\"3.4\\",\\n \\"3.5\\",\\n \\"3.3\\",\\n \\"v3.9.25-1\\",\\n \\"v3.6.173.0.49-4\\",\\n \\"v3.7.44\\",\\n \\"v3.7.46\\",\\n \\"v3.7.42\\",\\n \\"v3.6.173.0.5\\",\\n \\"v3.6.173.0.140\\",\\n \\"v3.10.83\\",\\n \\"v3.6.173.0.96-2\\",\\n \\"v3.9.31-2\\",\\n \\"v3.9.14-2\\",\\n \\"v3.10.45-2\\",\\n \\"v3.6.173.0.129-2\\",\\n \\"v3.6\\",\\n \\"v3.8.44\\",\\n \\"v3.10.14\\",\\n \\"v3.7.52\\",\\n \\"v3.10.72-3\\",\\n \\"v3.7.64\\",\\n \\"v3.7.57\\",\\n \\"v3.7.54\\",\\n \\"v3.10.72-5\\",\\n \\"v3.6.173.0.126-2\\",\\n \\"v3.6.173.0.130\\",\\n \\"v3.9.40-2\\",\\n \\"v3.11.16-4\\",\\n \\"v3.7.14-5\\",\\n \\"3.5-9\\",\\n \\"3.5-7\\",\\n \\"v3.6.173.0.83\\",\\n \\"3.5.0-42\\",\\n \\"v3.9.27-1\\",\\n \\"3.4-9\\",\\n \\"3.4-8\\",\\n \\"3.4-6\\",\\n \\"3.4-4\\",\\n \\"3.4-2\\",\\n \\"v3.9.43-2\\",\\n \\"v3.6.173.0.123\\",\\n \\"v3.6.173.0.124\\",\\n \\"v3.6.173.0.126\\",\\n \\"v3.6.173.0.129\\",\\n \\"v3.6.173.0.128\\",\\n \\"v3.7.23\\",\\n \\"v3.7.61-2\\",\\n \\"v3.6.173.0.112-3\\",\\n \\"3.4-28\\",\\n \\"3.4-27\\",\\n \\"3.4-22\\",\\n \\"3.4-20\\",\\n \\"3.3-1\\",\\n \\"3.5-5\\",\\n \\"v3.11.43-2\\",\\n \\"v3.10.34\\",\\n \\"v3.9.14\\",\\n \\"v3.7.62-2\\",\\n \\"3.3-4\\",\\n \\"v3.6.173.0.117\\",\\n \\"v3.6.173.0.112\\",\\n \\"v3.6.173.0.113\\",\\n \\"v3.9.57-2\\",\\n \\"v3.9.41\\",\\n \\"v3.9.33\\",\\n \\"v3.8.36\\",\\n \\"v3.7.72-2\\",\\n \\"3.3-13\\",\\n \\"3.3-10\\",\\n \\"v3.7.46-1\\",\\n \\"3.3-16\\",\\n \\"v3.6.173.0.21\\",\\n \\"v3.10.14-12\\",\\n \\"3.4-30\\",\\n \\"v3.10.34-3\\",\\n \\"v3.6.173.0.21-17\\",\\n \\"v3.10.45\\",\\n \\"3.5-27\\",\\n \\"3.5-26\\",\\n \\"v3.9.25\\",\\n \\"v3.9.27\\",\\n \\"v3.11.16\\",\\n \\"v3.9.30\\",\\n \\"v3.6.173.0.113-3\\",\\n \\"v3.8.36-2\\",\\n \\"3.5-15\\",\\n \\"3.5-16\\",\\n \\"v3.7.57-4\\",\\n \\"v3.9.30-2\\",\\n \\"v3.11\\",\\n \\"v3.10\\",\\n \\"v3.8.44-2\\",\\n \\"3.5-18\\",\\n \\"v3.9.33-3\\",\\n \\"v3.10.14-9\\",\\n \\"v3.7.52-1\\",\\n \\"v3.10.14-7\\",\\n \\"v3.7.14\\",\\n \\"v3.10.83-2\\",\\n \\"v3.8.37-3\\",\\n \\"v3.9.41-2\\",\\n \\"v3.6.173.0.130-1\\",\\n \\"v3.9.31\\",\\n \\"v3.6.173.0.128-2\\",\\n \\"latest\\"\\n ],\\n \\"Created\\": \\"2018-12-04T04:56:10.495121924Z\\",\\n \\"DockerVersion\\": \\"1.13.1\\",\\n \\"Labels\\": {\\n \\"License\\": \\"GPLv2+\\",\\n \\"architecture\\": \\"x86_64\\",\\n \\"authoritative-source-url\\": \\"registry.access.redhat.com\\",\\n \\"build-date\\": \\"2018-12-04T04:43:51.234230\\",\\n \\"com.redhat.build-host\\": \\"cpt-0011.osbs.prod.upshift.rdu2.redhat.com\\",\\n \\"com.redhat.component\\": \\"registry-console-container\\",\\n \\"description\\": \\"The registry-console image provides a web console for the OpenShift Container Platform stand-alone registry.\\",\\n \\"distribution-scope\\": \\"public\\",\\n \\"io.k8s.description\\": \\"The registry-console image provides a web console for the OpenShift Container Platform stand-alone registry.\\",\\n \\"io.k8s.display-name\\": \\"OpenShift Stand-Alone Registry Console\\",\\n \\"io.openshift.tags\\": \\"openshift,container,image,registry,console\\",\\n \\"name\\": \\"openshift3/registry-console\\",\\n \\"release\\": \\"2\\",\\n \\"summary\\": \\"Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and 
supported base image.\\",\\n \\"url\\": \\"https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/registry-console/images/v3.11.51-2\\",\\n \\"vcs-ref\\": \\"92d96d97d51af478f0adde46dc0d62bc81b11dba\\",\\n \\"vcs-type\\": \\"git\\",\\n \\"vendor\\": \\"Red Hat, Inc.\\",\\n \\"version\\": \\"v3.11.51\\"\\n },\\n \\"Architecture\\": \\"amd64\\",\\n \\"Os\\": \\"linux\\",\\n \\"Layers\\": [\\n \\"sha256:23113ae36f8e9d98b1423e44673979132dec59db2805e473e931d83548b0be82\\",\\n \\"sha256:d134b18b98b0d113b7b1194a60efceaa2c06eff41386d6c14b0e44bfe557eee8\\",\\n \\"sha256:2da2247915c724f54109fe6835c2094336e21a2d22823c91d403e608a49c45db\\"\\n ]\\n}", "cmd": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/registry-console:v3.11", "rc": 0, "start": "2019-01-09 15:57:17.420849", "stderr": "", "delta": "0:00:03.732825", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/registry-console:v3.11", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:24.621203", "stdout": "{\\n \\"Name\\": \\"registry.redhat.io/openshift3/ose-pod\\",\\n \\"Digest\\": \\"sha256:7cae0b38c12a11b07c47d4b50159cfd75b6b1a5620613957c26df10ed01e448d\\",\\n \\"RepoTags\\": [\\n \\"v3.3.1.20\\",\\n \\"v3.3.0.34-1\\",\\n \\"v3.3.1.25\\",\\n \\"v3.5.5.5\\",\\n \\"v3.10.72\\",\\n \\"v3.3.1.4-1\\",\\n \\"v3.1.1.11\\",\\n \\"v3.7.9-21\\",\\n \\"v3.9.57\\",\\n \\"v3.10.66\\",\\n \\"v3.2.1.28-3\\",\\n \\"v3.4.1.24\\",\\n \\"v3.3.1.46.45-2\\",\\n \\"v3.7.64-2\\",\\n \\"v3.6.173.0.83-2\\",\\n \\"v3.3\\",\\n \\"v3.2\\",\\n \\"v3.7.61\\",\\n \\"v3.3.1.7-0\\",\\n \\"v3.7\\",\\n \\"v3.10.45-5\\",\\n \\"v3.5\\",\\n \\"v3.4\\",\\n \\"v3.5.5.31.80\\",\\n \\"v3.9\\",\\n \\"v3.4.1.44\\",\\n \\"v3.3.1.38-2\\",\\n \\"v3.2.1.7-1\\",\\n \\"v3.7.23-3\\",\\n \\"v3.9.51-2\\",\\n \\"v3.3.1.11-2\\",\\n \\"v3.3.1.46.11-3\\",\\n \\"v3.6.173.0.124-2\\",\\n \\"v3.11.51\\",\\n \\"v3.7.54-1\\",\\n \\"v3.6.173.0.63-11\\",\\n \\"v3.9.41\\",\\n \\"v3.8.36\\",\\n \\"v3.11.51-2\\",\\n \\"v3.5.5.8\\",\\n \\"v3.4.1.44.17-14\\",\\n \\"v3.5.5.26-2\\",\\n \\"v3.6.173.0.113-3\\",\\n \\"v3.2.1.21-1\\",\\n \\"v3.5.5.31.48-2\\",\\n \\"v3.1.1.10-2\\",\\n \\"v3.2.1.1\\",\\n \\"v3.2.1.4\\",\\n \\"v3.7.9\\",\\n \\"v3.2.1.7\\",\\n \\"v3.2.1.9\\",\\n \\"v3.6.173.0.49\\",\\n \\"v3.7.72\\",\\n \\"v3.5.5.26\\",\\n \\"v3.5.5.24\\",\\n \\"v3.6.173.0.117-1\\",\\n \\"v3.6.173.0.5-4\\",\\n \\"v3.6.173.0.5-5\\",\\n \\"v3.3.1.3-1\\",\\n \\"v3.4.1.2-2\\",\\n \\"v3.5.5.5-2\\",\\n \\"v3.6.173.0.5-2\\",\\n \\"v3.1.1.8-1\\",\\n \\"v3.6.173.0.140-2\\",\\n \\"v3.2.1.17-1\\",\\n \\"v3.2.1.26-2\\",\\n \\"v3.11.43\\",\\n \\"v3.4.0.39-2\\",\\n \\"v3.2.0.20\\",\\n \\"v3.9.51\\",\\n \\"v3.5.5.31.47-10\\",\\n \\"v3.5.5.31.66-2\\",\\n \\"v3.7.42-2\\",\\n 
\\"v3.4.1.16-2\\",\\n \\"v3.10\\",\\n \\"v3.4.1.24-3\\",\\n \\"v3.3.1.17-4\\",\\n \\"v3.2.0.46\\",\\n \\"v3.2.0.44\\",\\n \\"v3.7.44-3\\",\\n \\"v3.6.173.0.63\\",\\n \\"v3.8\\",\\n \\"v3.7.57-3\\",\\n \\"v3.10.66-4\\",\\n \\"v3.9.25-1\\",\\n \\"v3.6.173.0.96\\",\\n \\"v3.9.43\\",\\n \\"v3.5.5.31.67\\",\\n \\"v3.6.173.0.49-4\\",\\n \\"v3.5.5.15\\",\\n \\"v3.3.1.19-2\\",\\n \\"v3.2.1.13\\",\\n \\"v3.7.44\\",\\n \\"v3.7.46\\",\\n \\"v3.2.0.20-3\\",\\n \\"v3.7.42\\",\\n \\"v3.3.1.46.39\\",\\n \\"v3.3.1.46.39-3\\",\\n \\"v3.6.173.0.5\\",\\n \\"v3.6.173.0.140\\",\\n \\"v3.4.1.44.53\\",\\n \\"v3.4.1.44.52\\",\\n \\"v3.4.0.39\\",\\n \\"v3.10.83\\",\\n \\"v3.4.1.44.57\\",\\n \\"v3.4.1.44.11-2\\",\\n \\"v3.6.173.0.96-2\\",\\n \\"v3.9.31-2\\",\\n \\"v3.5.5.31\\",\\n \\"v3.2.1.31-4\\",\\n \\"v3.4.1.37-2\\",\\n \\"v3.7.62\\",\\n \\"v3.1.0.4\\",\\n \\"v3.9.14-2\\",\\n \\"v3.1\\",\\n \\"v3.6.173.0.129-2\\",\\n \\"v3.2.1.4-1\\",\\n \\"v3.3.1.46.45\\",\\n \\"v3.2.1.30-3\\",\\n \\"v3.3.1.14-1\\",\\n \\"v3.3.1.20-5\\",\\n \\"v3.6\\",\\n \\"v3.4.1.37\\",\\n \\"v3.5.5.31-2\\",\\n \\"v3.8.44\\",\\n \\"v3.4.1.33\\",\\n \\"v3.10.14\\",\\n \\"v3.7.52\\",\\n \\"v3.10.72-3\\",\\n \\"v3.7.64\\",\\n \\"v3.8.37\\",\\n \\"v3.7.54\\",\\n \\"v3.10.72-5\\",\\n \\"v3.6.173.0.126-2\\",\\n \\"v3.8.36-7\\",\\n \\"v3.6.173.0.130\\",\\n \\"v3.9.40-2\\",\\n \\"v3.4.1.2\\",\\n \\"v3.8.44-2\\",\\n \\"v3.2.1.34-5\\",\\n \\"v3.7.14-5\\",\\n \\"v3.2.1.34-3\\",\\n \\"v3.5.5.31.19-2\\",\\n \\"v3.10.45-2\\",\\n \\"v3.5.5.31.19\\",\\n \\"v3.7.72-2\\",\\n \\"v3.1.0.4-2\\",\\n \\"v3.1.0.4-1\\",\\n \\"v3.3.1.5-2\\",\\n \\"v3.4.0.40-1\\",\\n \\"v3.3.0.35-1\\",\\n \\"v3.3.1.17-2\\",\\n \\"v3.5.5.31.48\\",\\n \\"v3.4.1.44.52-3\\",\\n \\"v3.4.1.44-2\\",\\n \\"v3.5.5.31.47\\",\\n \\"v3.5.5.31.36-4\\",\\n \\"v3.2.0.46-2\\",\\n \\"v3.9.33\\",\\n \\"v3.6.173.0.130-1\\",\\n \\"v3.6.173.0.83\\",\\n \\"v3.2.1.31\\",\\n \\"v3.6.173.0.21\\",\\n \\"v3.4.0.40\\",\\n \\"v3.0.0.1\\",\\n \\"v3.11.16\\",\\n \\"v3.9.43-2\\",\\n \\"v3.1.1.11-3\\",\\n \\"v3.1.1.11-2\\",\\n \\"v3.6.173.0.123\\",\\n \\"v3.6.173.0.124\\",\\n \\"v3.6.173.0.126\\",\\n \\"v3.6.173.0.129\\",\\n \\"v3.4.1.7\\",\\n \\"v3.4.1.5\\",\\n \\"v3.7.23\\",\\n \\"v3.7.61-2\\",\\n \\"v3.4.1.44.38\\",\\n \\"v3.4.1.44.17\\",\\n \\"v3.4.1.7-2\\",\\n \\"v3.6.173.0.112-3\\",\\n \\"v3.4.1.44.11\\",\\n \\"v3.3.1.11\\",\\n \\"v3.3.1.46.11\\",\\n \\"v3.3.1.14\\",\\n \\"v3.3.1.17\\",\\n \\"v3.4.1.5-2\\",\\n \\"v3.0.1.0\\",\\n \\"v3.2.1.9-3\\",\\n \\"v3.4.1.18\\",\\n \\"v3.3.1.35-2\\",\\n \\"v3.6.173.0.128\\",\\n \\"v3.5.5.24-2\\",\\n \\"v3.4.1.16\\",\\n \\"v3.11.43-2\\",\\n \\"v3.4.1.10\\",\\n \\"v3.10.34\\",\\n \\"v3.4.1.12\\",\\n \\"v3.3.1.25-3\\",\\n \\"v3.9.14\\",\\n \\"v3.4.1.44.26-4\\",\\n \\"v3.7.62-2\\",\\n \\"v3.5.5.31.80-4\\",\\n \\"v3.5.5.31.66\\",\\n \\"v3.6.173.0.117\\",\\n \\"v3.2.1.13-1\\",\\n \\"v3.6.173.0.112\\",\\n \\"v3.6.173.0.113\\",\\n \\"v3.9.57-2\\",\\n \\"v3.2.1.30\\",\\n \\"v3.1.1.8\\",\\n \\"v3.2.1.17\\",\\n \\"v3.9.41-2\\",\\n \\"v3.2.1.31-2\\",\\n \\"v3.3.0.32-2\\",\\n \\"v3.4.1.18-4\\",\\n \\"v3.9.40\\",\\n \\"v3.1.1.7\\",\\n \\"v3.1.1.6\\",\\n \\"v3.2.1.15-1\\",\\n \\"v3.4.1.12-3\\",\\n \\"v3.3.1.19\\",\\n \\"v3.4.1.44.38-11\\",\\n \\"v3.7.46-1\\",\\n \\"v3.4.1.44.26\\",\\n \\"v3.9.27-1\\",\\n \\"v3.10.14-10\\",\\n \\"v3.2.1.1-2\\",\\n \\"v3.10.14-13\\",\\n \\"v3.4.1.10-3\\",\\n \\"v3.10.34-3\\",\\n \\"v3.5.5.8-3\\",\\n \\"v3.5.5.31.67-3\\",\\n \\"v3.6.173.0.21-17\\",\\n \\"v3.10.45\\",\\n \\"v3.1.1.10\\",\\n \\"v3.5.5.31.24-15\\",\\n \\"v3.5.5.31.24\\",\\n \\"v3.11.16-3\\",\\n 
\\"v3.2.1.15\\",\\n \\"v3.1.1.6-8\\",\\n \\"v3.1.1.6-9\\",\\n \\"v3.9.25\\",\\n \\"v3.1.1.6-5\\",\\n \\"v3.9.27\\",\\n \\"v3.1.1.6-3\\",\\n \\"v3.2.1.26\\",\\n \\"v3.2.1.23\\",\\n \\"v3.2.1.21\\",\\n \\"v3.2.1.23-2\\",\\n \\"v3.2.1.28\\",\\n \\"v3.4.1.33-2\\",\\n \\"v3.7.57\\",\\n \\"v3.5.5.31.36\\",\\n \\"v3.9.30-2\\",\\n \\"v3.11\\",\\n \\"v3.3.1.38\\",\\n \\"v3.3.1.35\\",\\n \\"v3.4.1.44.57-2\\",\\n \\"v3.9.33-3\\",\\n \\"v3.10.14-8\\",\\n \\"v3.7.52-1\\",\\n \\"v3.3.1.3\\",\\n \\"v3.3.1.5\\",\\n \\"v3.3.1.4\\",\\n \\"v3.3.1.7\\",\\n \\"v3.0.0.0\\",\\n \\"v3.2.1.34-20\\",\\n \\"v3.6.173.0.123-2\\",\\n \\"v3.7.14\\",\\n \\"v3.10.83-2\\",\\n \\"v3.2.0.44-2\\",\\n \\"v3.8.37-3\\",\\n \\"v3.4.1.44.53-4\\",\\n \\"v3.9.30\\",\\n \\"v3.9.31\\",\\n \\"v3.2.1.34\\",\\n \\"v3.0.2.0\\",\\n \\"v3.1.1.7-0\\",\\n \\"v3.3.0.34\\",\\n \\"v3.3.0.35\\",\\n \\"v3.6.173.0.128-2\\",\\n \\"v3.5.5.15-3\\",\\n \\"v3.1.1.6-6\\",\\n \\"v3.3.0.32\\",\\n \\"latest\\"\\n ],\\n \\"Created\\": \\"2018-12-04T06:07:40.61473383Z\\",\\n \\"DockerVersion\\": \\"1.13.1\\",\\n \\"Labels\\": {\\n \\"License\\": \\"GPLv2+\\",\\n \\"architecture\\": \\"x86_64\\",\\n \\"authoritative-source-url\\": \\"registry.access.redhat.com\\",\\n \\"build-date\\": \\"2018-12-04T06:03:26.058517\\",\\n \\"com.redhat.build-host\\": \\"cpt-0012.osbs.prod.upshift.rdu2.redhat.com\\",\\n \\"com.redhat.component\\": \\"openshift-enterprise-pod-container\\",\\n \\"description\\": \\"This is a component of OpenShift Container Platform and holds on to the shared Linux namespaces within a Pod.\\",\\n \\"distribution-scope\\": \\"public\\",\\n \\"io.k8s.description\\": \\"This is a component of OpenShift Container Platform and holds on to the shared Linux namespaces within a Pod.\\",\\n \\"io.k8s.display-name\\": \\"OpenShift Container Platform Pod Infrastructure\\",\\n \\"io.openshift.tags\\": \\"openshift,pod\\",\\n \\"name\\": \\"openshift3/ose-pod\\",\\n \\"release\\": \\"2\\",\\n \\"summary\\": \\"Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and supported base image.\\",\\n \\"url\\": \\"https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/ose-pod/images/v3.11.51-2\\",\\n \\"vcs-ref\\": \\"1ea52decc8f9b5cb2e27bb6f541605e824b7232c\\",\\n \\"vcs-type\\": \\"git\\",\\n \\"vendor\\": \\"Red Hat, Inc.\\",\\n \\"version\\": \\"v3.11.51\\"\\n },\\n \\"Architecture\\": \\"amd64\\",\\n \\"Os\\": \\"linux\\",\\n \\"Layers\\": [\\n \\"sha256:23113ae36f8e9d98b1423e44673979132dec59db2805e473e931d83548b0be82\\",\\n \\"sha256:d134b18b98b0d113b7b1194a60efceaa2c06eff41386d6c14b0e44bfe557eee8\\",\\n \\"sha256:e08cb06c2905b3fe45884de4a320ba7becbc2ee0518067440386f516319cf679\\",\\n \\"sha256:c1c645d2e796c1e5d348d86d62fc0c932e5cd09a0911a09073f2c57b529d8d9f\\"\\n ]\\n}", "cmd": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-pod:v3.11", "rc": 0, "start": "2019-01-09 15:57:21.306503", "stderr": "", "delta": "0:00:03.314700", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-pod:v3.11", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o 
StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:28.814680", "stdout": "{\\n \\"Name\\": \\"registry.redhat.io/openshift3/ose-haproxy-router\\",\\n \\"Digest\\": \\"sha256:0f7649e378ed258a10a876c07f42c21a4c4cfb914be8930aad69fcf035403988\\",\\n \\"RepoTags\\": [\\n \\"v3.3.0.34-1\\",\\n \\"v3.3.1.20\\",\\n \\"v3.3.1.25\\",\\n \\"v3.5.5.5\\",\\n \\"v3.8.36\\",\\n \\"v3.10.72\\",\\n \\"v3.3.1.4-1\\",\\n \\"v3.1.1.11\\",\\n \\"v3.7.9-21\\",\\n \\"v3.10.66\\",\\n \\"v3.2.1.9\\",\\n \\"v3.4\\",\\n \\"v3.3.1.19-2\\",\\n \\"v3.7.64-2\\",\\n \\"v3.3\\",\\n \\"v3.7.62\\",\\n \\"v3.10.45-2\\",\\n \\"v3.7\\",\\n \\"v3.6\\",\\n \\"v3.5\\",\\n \\"v3.7.64\\",\\n \\"v3.3.1.46.11-3\\",\\n \\"v3.9\\",\\n \\"v3.8\\",\\n \\"v3.3.1.38-2\\",\\n \\"v3.7.23-3\\",\\n \\"v3.1.1.10\\",\\n \\"v3.9.51-2\\",\\n \\"v3.3.1.11-2\\",\\n \\"v3.4.1.12\\",\\n \\"v3.6.173.0.124-2\\",\\n \\"v3.6.173.0.63-11\\",\\n \\"v3.11.51\\",\\n \\"v3.7.54-1\\",\\n \\"v3.9.43\\",\\n \\"v3.9.41\\",\\n \\"v3.9.40\\",\\n \\"v3.11.51-2\\",\\n \\"v3.5.5.8\\",\\n \\"v3.4.1.44.17-14\\",\\n \\"v3.5.5.26-2\\",\\n \\"v3.6.173.0.113-2\\",\\n \\"v3.2.1.21-1\\",\\n \\"v3.6.173.0.96-2\\",\\n \\"v3.5.5.31.48-3\\",\\n \\"v3.2.0.44-2\\",\\n \\"v3.3.1.25-3\\",\\n \\"v3.2.1.4\\",\\n \\"v3.7.9\\",\\n \\"v3.2.1.7\\",\\n \\"v3.9.14\\",\\n \\"v3.6.173.0.49\\",\\n \\"v3.7.72\\",\\n \\"v3.4.1.5-2\\",\\n \\"v3.5.5.24\\",\\n \\"v3.6.173.0.117-1\\",\\n \\"v3.6.173.0.5-4\\",\\n \\"v3.6.173.0.5-5\\",\\n \\"v3.3.1.3-1\\",\\n \\"v3.4.1.2-2\\",\\n \\"v3.5.5.5-2\\",\\n \\"v3.6.173.0.5-2\\",\\n \\"v3.1.1.8-1\\",\\n \\"v3.6.173.0.140-2\\",\\n \\"v3.2.1.17-2\\",\\n \\"v3.2.1.26-2\\",\\n \\"v3.11.43\\",\\n \\"v3.9.57\\",\\n \\"v3.2.0.20\\",\\n \\"v3.9.51\\",\\n \\"v3.5.5.31.66-2\\",\\n \\"v3.7.42-2\\",\\n \\"v3.4.1.16-2\\",\\n \\"v3.7.23\\",\\n \\"v3.2.0.46\\",\\n \\"v3.7.44-2\\",\\n \\"v3.6.173.0.63\\",\\n \\"v3.10.66-4\\",\\n \\"v3.3.1.4\\",\\n \\"v3.9.25-1\\",\\n \\"v3.1.1.10-2\\",\\n \\"v3.5.5.31.80\\",\\n \\"v3.5.5.31.67\\",\\n \\"v3.4.1.24\\",\\n \\"v3.5.5.15\\",\\n \\"v3.6.173.0.83-4\\",\\n \\"v3.7.44\\",\\n \\"v3.7.46\\",\\n \\"v3.2.0.20-3\\",\\n \\"v3.7.42\\",\\n \\"v3.3.1.46.39\\",\\n \\"v3.3.1.46.39-3\\",\\n \\"v3.6.173.0.5\\",\\n \\"v3.6.173.0.140\\",\\n \\"v3.4.1.44.53\\",\\n \\"v3.4.1.44.52\\",\\n \\"v3.10.83\\",\\n \\"v3.4.1.44.57\\",\\n \\"v3.4.1.44.52-3\\",\\n \\"v3.9.31-2\\",\\n \\"v3.5.5.31\\",\\n \\"v3.2.1.31-4\\",\\n \\"v3.4.1.37-2\\",\\n \\"v3.2\\",\\n \\"v3.3.1.7-0\\",\\n \\"v3.1\\",\\n \\"v3.6.173.0.129-2\\",\\n \\"v3.2.1.4-1\\",\\n \\"v3.3.1.46.45\\",\\n \\"v3.2.1.30-3\\",\\n \\"v3.4.1.18-3\\",\\n \\"v3.3.1.14-1\\",\\n \\"v3.9.33-3\\",\\n \\"v3.4.0.40\\",\\n \\"v3.3.1.20-4\\",\\n \\"v3.4.1.44.11-2\\",\\n \\"v3.4.1.37\\",\\n \\"v3.2.1.9-3\\",\\n \\"v3.8.44\\",\\n \\"v3.10.45-6\\",\\n \\"v3.10.14\\",\\n \\"v3.7.52\\",\\n \\"v3.10.72-3\\",\\n \\"v3.2.1.7-1\\",\\n \\"v3.7.57\\",\\n \\"v3.7.54\\",\\n \\"v3.1.1.6\\",\\n \\"v3.11.16-3\\",\\n \\"v3.6.173.0.130\\",\\n \\"v3.9.40-2\\",\\n \\"v3.8.44-2\\",\\n \\"v3.2.1.34-5\\",\\n \\"v3.1.0.4\\",\\n \\"v3.2.1.34-3\\",\\n \\"v3.5.5.31.19-2\\",\\n \\"v3.4.1.44\\",\\n \\"v3.2.0.44\\",\\n \\"v3.1.0.4-2\\",\\n \\"v3.1.0.4-1\\",\\n \\"v3.5.5.31.47-10\\",\\n \\"v3.4.0.40-1\\",\\n \\"v3.3.0.35-1\\",\\n 
\\"v3.3.1.17-2\\",\\n \\"v3.5.5.31.48\\",\\n \\"v3.3.1.17-4\\",\\n \\"v3.4.1.44-2\\",\\n \\"v3.5.5.31.47\\",\\n \\"v3.2.0.46-2\\",\\n \\"v3.4.0.39\\",\\n \\"v3.6.173.0.130-1\\",\\n \\"v3.6.173.0.83\\",\\n \\"v3.2.1.31\\",\\n \\"v3.6.173.0.21\\",\\n \\"v3.1.1.6-3\\",\\n \\"v3.4.1.24-3\\",\\n \\"v3.10.72-5\\",\\n \\"v3.6.173.0.123\\",\\n \\"v3.4.0.39-2\\",\\n \\"v3.9.43-2\\",\\n \\"v3.1.1.11-3\\",\\n \\"v3.1.1.11-2\\",\\n \\"v3.11.16\\",\\n \\"v3.0.0.1\\",\\n \\"v3.6.173.0.126\\",\\n \\"v3.6.173.0.129\\",\\n \\"v3.6.173.0.128\\",\\n \\"v3.4.1.5\\",\\n \\"v3.5.5.31.80-4\\",\\n \\"v3.7.61-2\\",\\n \\"v3.4.1.44.38\\",\\n \\"v3.4.1.44.17\\",\\n \\"v3.4.1.7-2\\",\\n \\"v3.6.173.0.112-3\\",\\n \\"v3.4.1.44.11\\",\\n \\"v3.3.1.11\\",\\n \\"v3.3.1.46.11\\",\\n \\"v3.5.5.26\\",\\n \\"v3.3.1.14\\",\\n \\"v3.3.1.17\\",\\n \\"v3.0.1.0\\",\\n \\"v3.4.1.18\\",\\n \\"v3.2.1.28-3\\",\\n \\"v3.3.1.38\\",\\n \\"v3.4.1.7\\",\\n \\"v3.5.5.24-2\\",\\n \\"v3.4.1.16\\",\\n \\"v3.10.34-3\\",\\n \\"v3.4.1.10\\",\\n \\"v3.10.34\\",\\n \\"v3.5.5.31.36\\",\\n \\"v3.4.1.33\\",\\n \\"v3.9.41-2\\",\\n \\"v3.6.173.0.96\\",\\n \\"v3.4.1.44.26-4\\",\\n \\"v3.7.62-2\\",\\n \\"v3.2.1.17\\",\\n \\"v3.3.1.5-2\\",\\n \\"v3.4.1.2\\",\\n \\"v3.6.173.0.117\\",\\n \\"v3.2.1.13-1\\",\\n \\"v3.6.173.0.112\\",\\n \\"v3.6.173.0.113\\",\\n \\"v3.9.57-2\\",\\n \\"v3.2.1.13\\",\\n \\"v3.1.1.8\\",\\n \\"v3.8.37\\",\\n \\"v3.8.37-2\\",\\n \\"v3.2.1.31-2\\",\\n \\"v3.3.0.32-2\\",\\n \\"v3.5.5.31-2\\",\\n \\"v3.1.1.7\\",\\n \\"v3.7.72-2\\",\\n \\"v3.2.1.15-1\\",\\n \\"v3.4.1.12-3\\",\\n \\"v3.3.1.19\\",\\n \\"v3.4.1.44.38-11\\",\\n \\"v3.7.46-1\\",\\n \\"v3.4.1.44.26\\",\\n \\"v3.9.27-2\\",\\n \\"v3.9.27-1\\",\\n \\"v3.5.5.31.67-2\\",\\n \\"v3.10.14-10\\",\\n \\"v3.2.1.1-2\\",\\n \\"v3.10.14-13\\",\\n \\"v3.4.1.10-3\\",\\n \\"v3.11.43-2\\",\\n \\"v3.5.5.8-3\\",\\n \\"v3.6.173.0.21-17\\",\\n \\"v3.10.45\\",\\n \\"v3.5.5.31.66\\",\\n \\"v3.5.5.31.24-15\\",\\n \\"v3.5.5.31.24\\",\\n \\"v3.6.173.0.126-2\\",\\n \\"v3.2.1.15\\",\\n \\"v3.1.1.6-8\\",\\n \\"v3.1.1.6-9\\",\\n \\"v3.6.173.0.124\\",\\n \\"v3.9.25\\",\\n \\"v3.1.1.6-5\\",\\n \\"v3.1.1.6-6\\",\\n \\"v3.1.1.6-7\\",\\n \\"v3.8.36-4\\",\\n \\"v3.3.1.46.45-2\\",\\n \\"v3.2.1.26\\",\\n \\"v3.2.1.23\\",\\n \\"v3.2.1.21\\",\\n \\"v3.2.1.23-2\\",\\n \\"v3.6.173.0.49-4\\",\\n \\"v3.2.1.28\\",\\n \\"v3.4.1.33-2\\",\\n \\"v3.7.57-3\\",\\n \\"v3.9.30-2\\",\\n \\"v3.11\\",\\n \\"v3.10\\",\\n \\"v3.3.1.35\\",\\n \\"v3.4.1.44.57-2\\",\\n \\"v3.5.5.31.36-4\\",\\n \\"v3.7.61\\",\\n \\"v3.10.14-8\\",\\n \\"v3.7.52-1\\",\\n \\"v3.3.1.3\\",\\n \\"v3.3.1.5\\",\\n \\"v3.7.14-5\\",\\n \\"v3.3.1.7\\",\\n \\"v3.5.5.31.19\\",\\n \\"v3.6.173.0.123-2\\",\\n \\"v3.7.14\\",\\n \\"v3.10.83-2\\",\\n \\"v3.2.1.1\\",\\n \\"v3.2.1.30\\",\\n \\"v3.9.33\\",\\n \\"v3.9.30\\",\\n \\"v3.9.31\\",\\n \\"v3.2.1.34\\",\\n \\"v3.0.2.0\\",\\n \\"v3.4.1.44.53-3\\",\\n \\"v3.9.14-8\\",\\n \\"v3.0.0.0\\",\\n \\"v3.1.1.7-0\\",\\n \\"v3.3.0.34\\",\\n \\"v3.3.0.35\\",\\n \\"v3.6.173.0.128-2\\",\\n \\"latest\\",\\n \\"v3.5.5.15-3\\",\\n \\"v3.9.27\\",\\n \\"v3.3.0.32\\",\\n \\"v3.3.1.35-2\\"\\n ],\\n \\"Created\\": \\"2018-12-04T06:25:59.306881921Z\\",\\n \\"DockerVersion\\": \\"1.13.1\\",\\n \\"Labels\\": {\\n \\"License\\": \\"GPLv2+\\",\\n \\"architecture\\": \\"x86_64\\",\\n \\"authoritative-source-url\\": \\"registry.access.redhat.com\\",\\n \\"build-date\\": \\"2018-12-04T06:23:56.020659\\",\\n \\"com.redhat.build-host\\": \\"cpt-0007.osbs.prod.upshift.rdu2.redhat.com\\",\\n \\"com.redhat.component\\": 
\\"openshift-enterprise-haproxy-router-container\\",\\n \\"description\\": \\"This is a component of OpenShift Container Platform and contains an HAProxy instance that automatically exposes services within the cluster through routes, and offers TLS termination, reencryption, or SNI-passthrough on ports 80 and 443.\\",\\n \\"distribution-scope\\": \\"public\\",\\n \\"io.k8s.description\\": \\"This is a component of OpenShift Container Platform and contains an HAProxy instance that automatically exposes services within the cluster through routes, and offers TLS termination, reencryption, or SNI-passthrough on ports 80 and 443.\\",\\n \\"io.k8s.display-name\\": \\"OpenShift Container Platform HAProxy Router\\",\\n \\"io.openshift.tags\\": \\"openshift,router,haproxy\\",\\n \\"name\\": \\"openshift3/ose-haproxy-router\\",\\n \\"release\\": \\"2\\",\\n \\"summary\\": \\"Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and supported base image.\\",\\n \\"url\\": \\"https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/ose-haproxy-router/images/v3.11.51-2\\",\\n \\"vcs-ref\\": \\"8fdaf14f66f4ebd3d2d8635688b054d573e5fdd2\\",\\n \\"vcs-type\\": \\"git\\",\\n \\"vendor\\": \\"Red Hat, Inc.\\",\\n \\"version\\": \\"v3.11.51\\"\\n },\\n \\"Architecture\\": \\"amd64\\",\\n \\"Os\\": \\"linux\\",\\n \\"Layers\\": [\\n \\"sha256:23113ae36f8e9d98b1423e44673979132dec59db2805e473e931d83548b0be82\\",\\n \\"sha256:d134b18b98b0d113b7b1194a60efceaa2c06eff41386d6c14b0e44bfe557eee8\\",\\n \\"sha256:e08cb06c2905b3fe45884de4a320ba7becbc2ee0518067440386f516319cf679\\",\\n \\"sha256:6aa3d7603262f3694717e953867d5a4c888b0060409fcd0d2b75bc4b4c512f66\\",\\n \\"sha256:e16196b37818f13f67a526af79002aec93ce7d2bf66b5ea48b69ddcf519e6444\\"\\n ]\\n}", "cmd": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-haproxy-router:v3.11", "rc": 0, "start": "2019-01-09 15:57:24.763512", "stderr": "", "delta": "0:00:04.051168", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-haproxy-router:v3.11", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:32.468335", "stdout": "{\\n \\"Name\\": \\"registry.redhat.io/openshift3/ose-deployer\\",\\n \\"Digest\\": \\"sha256:c16be3658755a19ba6ebb7af0b2890ba264106ea9013eb0c2c3c71c8856959bb\\",\\n \\"RepoTags\\": [\\n \\"v3.3.1.20\\",\\n \\"v3.3.0.34-1\\",\\n \\"v3.3.1.25\\",\\n \\"v3.5.5.5\\",\\n \\"v3.8.36\\",\\n \\"v3.10.72\\",\\n \\"v3.3.1.4-1\\",\\n \\"v3.7.9-21\\",\\n \\"v3.10.66\\",\\n \\"v3.2.1.28-3\\",\\n \\"v3.4.1.24\\",\\n \\"v3.3.1.19-2\\",\\n \\"v3.7.64-2\\",\\n \\"v3.3\\",\\n \\"v3.7.62\\",\\n \\"v3.1\\",\\n \\"v3.3.1.7-0\\",\\n \\"v3.7\\",\\n \\"v3.6\\",\\n \\"v3.5\\",\\n \\"v3.7.64\\",\\n 
\\"v3.5.5.31.80\\",\\n \\"v3.9\\",\\n \\"v3.4.1.44\\",\\n \\"v3.3.1.38-2\\",\\n \\"v3.7.23-3\\",\\n \\"v3.1.1.10\\",\\n \\"v3.9.51-2\\",\\n \\"v3.3.1.11-2\\",\\n \\"v3.4.1.44.38\\",\\n \\"v3.6.173.0.124-2\\",\\n \\"v3.11.51\\",\\n \\"v3.7.54-1\\",\\n \\"v3.9.43\\",\\n \\"v3.8.37\\",\\n \\"v3.9.40\\",\\n \\"v3.11.51-2\\",\\n \\"v3.5.5.31.47-10\\",\\n \\"v3.4.1.44.17-14\\",\\n \\"v3.5.5.26-2\\",\\n \\"v3.6.173.0.113-2\\",\\n \\"v3.2.1.21-1\\",\\n \\"v3.6.173.0.96-2\\",\\n \\"v3.5.5.31.48-3\\",\\n \\"v3.2.0.44-2\\",\\n \\"v3.6.173.0.123-2\\",\\n \\"v3.7.9\\",\\n \\"v3.4.1.44.38-11\\",\\n \\"v3.2.1.9\\",\\n \\"v3.6.173.0.49\\",\\n \\"v3.7.72\\",\\n \\"v3.4.1.5-2\\",\\n \\"v3.5.5.24\\",\\n \\"v3.6.173.0.117-1\\",\\n \\"v3.6.173.0.5-4\\",\\n \\"v3.6.173.0.5-5\\",\\n \\"v3.3.1.3-1\\",\\n \\"v3.4.1.2-2\\",\\n \\"v3.8.44-2\\",\\n \\"v3.6.173.0.5-2\\",\\n \\"v3.3.1.46.11\\",\\n \\"v3.6.173.0.140-2\\",\\n \\"v3.2.1.17-1\\",\\n \\"v3.4.1.12\\",\\n \\"v3.2.1.26-2\\",\\n \\"v3.3.1.46.45\\",\\n \\"v3.9.57\\",\\n \\"v3.2.0.20\\",\\n \\"v3.9.51\\",\\n \\"v3.5.5.31.66-2\\",\\n \\"v3.7.42-2\\",\\n \\"v3.4.1.16-2\\",\\n \\"v3.1.1.6-18\\",\\n \\"v3.4.1.24-3\\",\\n \\"v3.1.1.6-16\\",\\n \\"v3.1.1.6-14\\",\\n \\"v3.1.1.6-15\\",\\n \\"v3.2.0.46\\",\\n \\"v3.1.1.6-13\\",\\n \\"v3.7.44-2\\",\\n \\"v3.5.5.5-2\\",\\n \\"v3.6.173.0.63\\",\\n \\"v3.8\\",\\n \\"v3.10.66-4\\",\\n \\"v3.9.25-1\\",\\n \\"v3.4\\",\\n \\"v3.1.1.8-1\\",\\n \\"v3.3.1.17-2\\",\\n \\"v3.5.5.31.67\\",\\n \\"v3.3.0.32-2\\",\\n \\"v3.5.5.15\\",\\n \\"v3.6.173.0.83-4\\",\\n \\"v3.7.23-10\\",\\n \\"v3.1.1.7-0\\",\\n \\"v3.7.46\\",\\n \\"v3.2.0.20-3\\",\\n \\"v3.7.42\\",\\n \\"v3.3.1.46.39\\",\\n \\"v3.3.1.46.39-3\\",\\n \\"v3.6.173.0.5\\",\\n \\"v3.6.173.0.140\\",\\n \\"v3.4.1.44.53\\",\\n \\"v3.4.1.44.52\\",\\n \\"v3.10.83\\",\\n \\"v3.4.1.44.57\\",\\n \\"v3.4.1.44.52-3\\",\\n \\"v3.9.31-2\\",\\n \\"v3.5.5.31\\",\\n \\"v3.4.1.37-2\\",\\n \\"v3.2\\",\\n \\"v3.2.1.15\\",\\n \\"v3.7.61\\",\\n \\"v3.6.173.0.129-2\\",\\n \\"v3.2.1.4-1\\",\\n \\"v3.11.43\\",\\n \\"v3.2.1.30-3\\",\\n \\"v3.4.1.18-3\\",\\n \\"v3.1.1.10-2\\",\\n \\"v3.9.33-3\\",\\n \\"v3.4.0.40\\",\\n \\"v3.3.1.20-4\\",\\n \\"v3.4.1.44.11-2\\",\\n \\"v3.4.1.37\\",\\n \\"v3.2.1.9-3\\",\\n \\"v3.8.44\\",\\n \\"v3.10.45-6\\",\\n \\"v3.3.1.14\\",\\n \\"v3.10.14\\",\\n \\"v3.7.52\\",\\n \\"v3.10.72-3\\",\\n \\"v3.2.1.7-1\\",\\n \\"v3.7.57\\",\\n \\"v3.1.1.11\\",\\n \\"v3.10.72-5\\",\\n \\"v3.11.16-3\\",\\n \\"v3.6.173.0.130\\",\\n \\"v3.8.37-2\\",\\n \\"v3.9.40-2\\",\\n \\"v3.2.1.21\\",\\n \\"v3.2.1.34-5\\",\\n \\"v3.6.173.0.96-10\\",\\n \\"v3.1.0.4\\",\\n \\"v3.6.173.0.123\\",\\n \\"v3.2.1.7\\",\\n \\"v3.2.1.34-3\\",\\n \\"v3.2.1.4\\",\\n \\"v3.10.45-2\\",\\n \\"v3.1.0.4-2\\",\\n \\"v3.1.0.4-1\\",\\n \\"v3.3.1.5-2\\",\\n \\"v3.4.0.40-1\\",\\n \\"v3.3.0.35-1\\",\\n \\"v3.5.5.8\\",\\n \\"v3.2.1.31-4\\",\\n \\"v3.5.5.31.48\\",\\n \\"v3.3.1.17-4\\",\\n \\"v3.4.1.44-2\\",\\n \\"v3.5.5.31.47\\",\\n \\"v3.5.5.31.36-4\\",\\n \\"v3.2.0.46-2\\",\\n \\"v3.4.0.39\\",\\n \\"v3.6.173.0.130-1\\",\\n \\"v3.6.173.0.83\\",\\n \\"v3.2.1.31\\",\\n \\"v3.6.173.0.21\\",\\n \\"v3.1.1.6-3\\",\\n \\"v3.3.1.14-1\\",\\n \\"v3.1.1.11-1\\",\\n \\"v3.4.0.39-2\\",\\n \\"v3.9.43-2\\",\\n \\"v3.3.1.4\\",\\n \\"v3.1.1.11-2\\",\\n \\"v3.11.16\\",\\n \\"v3.6.173.0.124\\",\\n \\"v3.6.173.0.126\\",\\n \\"v3.6.173.0.129\\",\\n \\"v3.4.1.7\\",\\n \\"v3.4.1.5\\",\\n \\"v3.7.23\\",\\n \\"v3.7.61-2\\",\\n \\"v3.1.1.6-10\\",\\n \\"v3.2.1.13-1\\",\\n \\"v3.4.1.44.17\\",\\n \\"v3.4.1.7-2\\",\\n \\"v3.6.173.0.112-3\\",\\n 
\\"v3.4.1.44.11\\",\\n \\"v3.3.1.11\\",\\n \\"v3.1.1.6-12\\",\\n \\"v3.1.1.11-4\\",\\n \\"v3.5.5.26\\",\\n \\"v3.7.14\\",\\n \\"v3.3.1.17\\",\\n \\"v3.0.1.0\\",\\n \\"v3.4.1.18\\",\\n \\"v3.2.0.44\\",\\n \\"v3.3.1.38\\",\\n \\"v3.6.173.0.128\\",\\n \\"v3.5.5.24-2\\",\\n \\"v3.4.1.16\\",\\n \\"v3.10.34-3\\",\\n \\"v3.4.1.10\\",\\n \\"v3.10.34\\",\\n \\"v3.5.5.31.36\\",\\n \\"v3.4.1.33\\",\\n \\"v3.3.1.25-3\\",\\n \\"v3.6.173.0.96\\",\\n \\"v3.4.1.44.26-4\\",\\n \\"v3.7.62-2\\",\\n \\"v3.2.1.17\\",\\n \\"v3.5.5.31.80-4\\",\\n \\"v3.6.173.0.117\\",\\n \\"v3.6.173.0.112\\",\\n \\"v3.6.173.0.113\\",\\n \\"v3.9.57-2\\",\\n \\"v3.2.1.13\\",\\n \\"v3.1.1.8\\",\\n \\"v3.9.41\\",\\n \\"v3.9.41-2\\",\\n \\"v3.2.1.31-2\\",\\n \\"v3.5.5.31-2\\",\\n \\"v3.1.1.7\\",\\n \\"v3.7.72-2\\",\\n \\"v3.5.5.8-3\\",\\n \\"v3.4.1.12-3\\",\\n \\"v3.3.1.19\\",\\n \\"v3.7.46-1\\",\\n \\"v3.4.1.44.26\\",\\n \\"v3.9.27-1\\",\\n \\"v3.5.5.31.67-2\\",\\n \\"v3.10.14-10\\",\\n \\"v3.2.1.1-2\\",\\n \\"v3.10.14-13\\",\\n \\"v3.4.1.10-3\\",\\n \\"v3.11.43-2\\",\\n \\"v3.7.54\\",\\n \\"v3.3.1.35-2\\",\\n \\"v3.6.173.0.21-17\\",\\n \\"v3.10.45\\",\\n \\"v3.5.5.31.66\\",\\n \\"v3.5.5.31.24-15\\",\\n \\"v3.5.5.31.24\\",\\n \\"v3.6.173.0.126-2\\",\\n \\"v3.2.1.15-1\\",\\n \\"v3.1.1.6\\",\\n \\"v3.0.0.1\\",\\n \\"v3.9.25\\",\\n \\"v3.9.14\\",\\n \\"v3.9.27\\",\\n \\"v3.8.36-4\\",\\n \\"v3.3.1.46.45-2\\",\\n \\"v3.2.1.26\\",\\n \\"v3.2.1.23\\",\\n \\"v3.5.5.31.48-10\\",\\n \\"v3.2.1.23-2\\",\\n \\"v3.6.173.0.49-4\\",\\n \\"v3.5.5.31.19-2\\",\\n \\"v3.2.1.28\\",\\n \\"v3.4.1.33-2\\",\\n \\"v3.7.57-3\\",\\n \\"v3.9.30-2\\",\\n \\"v3.11\\",\\n \\"v3.10\\",\\n \\"v3.3.1.46.11-3\\",\\n \\"v3.3.1.35\\",\\n \\"v3.4.1.44.57-2\\",\\n \\"v3.6.173.0.63-11\\",\\n \\"v3.10.14-8\\",\\n \\"v3.1.1.6-20\\",\\n \\"v3.7.52-1\\",\\n \\"v3.3.1.3\\",\\n \\"v3.3.1.5\\",\\n \\"v3.7.14-5\\",\\n \\"v3.3.1.7\\",\\n \\"v3.5.5.31.19\\",\\n \\"v3.2.1.34-20\\",\\n \\"v3.4.1.2\\",\\n \\"v3.9.14-2\\",\\n \\"v3.10.83-2\\",\\n \\"v3.2.1.1\\",\\n \\"v3.2.1.30\\",\\n \\"v3.9.33\\",\\n \\"v3.9.30\\",\\n \\"v3.9.31\\",\\n \\"v3.2.1.34\\",\\n \\"v3.0.2.0\\",\\n \\"v3.4.1.44.53-3\\",\\n \\"v3.0.0.0\\",\\n \\"v3.7.44\\",\\n \\"v3.3.0.34\\",\\n \\"v3.3.0.35\\",\\n \\"v3.6.173.0.128-2\\",\\n \\"v3.5.5.15-3\\",\\n \\"v3.3.0.32\\",\\n \\"latest\\"\\n ],\\n \\"Created\\": \\"2018-12-04T06:24:34.759143124Z\\",\\n \\"DockerVersion\\": \\"1.13.1\\",\\n \\"Labels\\": {\\n \\"License\\": \\"GPLv2+\\",\\n \\"architecture\\": \\"x86_64\\",\\n \\"authoritative-source-url\\": \\"registry.access.redhat.com\\",\\n \\"build-date\\": \\"2018-12-04T06:23:48.373128\\",\\n \\"com.redhat.build-host\\": \\"cpt-0010.osbs.prod.upshift.rdu2.redhat.com\\",\\n \\"com.redhat.component\\": \\"openshift-enterprise-deployer-container\\",\\n \\"description\\": \\"This is a component of OpenShift Container Platform and executes the user deployment process to roll out new containers. It may be used as a base image for building your own custom deployer image.\\",\\n \\"distribution-scope\\": \\"public\\",\\n \\"io.k8s.description\\": \\"This is a component of OpenShift Container Platform and executes the user deployment process to roll out new containers. 
It may be used as a base image for building your own custom deployer image.\\",\\n \\"io.k8s.display-name\\": \\"OpenShift Container Platform Deployer\\",\\n \\"io.openshift.tags\\": \\"openshift,deployer\\",\\n \\"name\\": \\"openshift3/ose-deployer\\",\\n \\"release\\": \\"2\\",\\n \\"summary\\": \\"Provides the latest release of Red Hat Enterprise Linux 7 in a fully featured and supported base image.\\",\\n \\"url\\": \\"https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/ose-deployer/images/v3.11.51-2\\",\\n \\"vcs-ref\\": \\"02cd6e242f60cfa1f362bdaff5ef4eafc2c7aab6\\",\\n \\"vcs-type\\": \\"git\\",\\n \\"vendor\\": \\"Red Hat, Inc.\\",\\n \\"version\\": \\"v3.11.51\\"\\n },\\n \\"Architecture\\": \\"amd64\\",\\n \\"Os\\": \\"linux\\",\\n \\"Layers\\": [\\n \\"sha256:23113ae36f8e9d98b1423e44673979132dec59db2805e473e931d83548b0be82\\",\\n \\"sha256:d134b18b98b0d113b7b1194a60efceaa2c06eff41386d6c14b0e44bfe557eee8\\",\\n \\"sha256:e08cb06c2905b3fe45884de4a320ba7becbc2ee0518067440386f516319cf679\\",\\n \\"sha256:6aa3d7603262f3694717e953867d5a4c888b0060409fcd0d2b75bc4b4c512f66\\",\\n \\"sha256:b8862eb7b6c269bddcb543b13a63998251ce3265f8c947bb5a0f37e5ac24f805\\"\\n ]\\n}", "cmd": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-deployer:v3.11", "rc": 0, "start": "2019-01-09 15:57:28.944506", "stderr": "", "delta": "0:00:03.523829", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": " timeout 10 skopeo inspect --tls-verify=true --creds=rhel_scanplus:u1DVDdwQU1tzdE docker://registry.redhat.io/openshift3/ose-deployer:v3.11", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') CHECK [memory_availability : sp-os-master01.os.ad.scanplus.de] ************************************************************************************************************************************************************************************************************************************************************** changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "checks": { "disk_availability": {}, "docker_image_availability": { "changed": true }, "memory_availability": { "skipped": true, "skipped_reason": "Disabled by user request" } }, "playbook_context": "upgrade" } META: ran handlers PLAY [Verify upgrade can proceed on first master] *************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:14 Wednesday 09 January 2019 15:57:32 +0100 (0:00:22.214) 0:18:06.832 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fail] 
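The five probes above are the docker_image_availability health check running skopeo inspect over the multiplexed SSH session, once per required control-plane image (ose-docker-registry, registry-console, ose-pod, ose-haproxy-router, ose-deployer). A minimal sketch for re-running one probe by hand on the master, with placeholder credentials standing in for the ones logged above; a zero exit code plus the JSON manifest is what the check recorded as success here:

    # Probe a single image the same way the health check does:
    # 10 s timeout, TLS verification on, registry credentials supplied.
    timeout 10 skopeo inspect --tls-verify=true \
        --creds=USER:PASSWORD \
        docker://registry.redhat.io/openshift3/ose-pod:v3.11
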
***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:23 Wednesday 09 January 2019 15:57:32 +0100 (0:00:00.111) 0:18:06.944 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:32 Wednesday 09 January 2019 15:57:32 +0100 (0:00:00.123) 0:18:07.067 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:36 Wednesday 09 January 2019 15:57:32 +0100 (0:00:00.109) 0:18:07.177 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_storage_glusterfs : set_fact] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_config_facts.yml:2 Wednesday 09 January 2019 15:57:33 +0100 (0:00:00.113) 0:18:07.290 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_storage_glusterfs : Check for GlusterFS cluster health] ***************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/cluster_health.yml:4 Wednesday 09 January 2019 15:57:33 +0100 (0:00:00.199) 0:18:07.489 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_storage_glusterfs : set_fact] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_registry_facts.yml:2 Wednesday 09 
January 2019 15:57:33 +0100 (0:00:00.109) 0:18:07.599 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_storage_glusterfs : Check for GlusterFS cluster health] ***************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/cluster_health.yml:4 Wednesday 09 January 2019 15:57:33 +0100 (0:00:00.191) 0:18:07.790 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Verify master processes] ********************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Read master storage backend setting] ********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:57 Wednesday 09 January 2019 15:57:33 +0100 (0:00:00.119) 0:18:07.909 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "kubernetesMasterConfig.apiServerArguments.storage-backend", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": null, "backup_ext": ".20190109T155734", "curr_value_format": "yaml", "edits": null, "state": "list", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "list", "changed": false, "result": ["etcd3"]}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T155734", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "kubernetesMasterConfig.apiServerArguments.storage-backend", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "list", "update": false, "value": null, "value_type": "" } }, "result": [ "etcd3" ], "state": "list" } TASK [fail] 
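The "Read master storage backend setting" task above uses yedit in list mode to pull the key kubernetesMasterConfig.apiServerArguments.storage-backend out of /etc/origin/master/master-config.yaml, returning ["etcd3"]. A rough shell equivalent of that lookup, assuming PyYAML is present on the master (the yedit module itself requires it):

    # Read the same key the yedit task lists above; prints ['etcd3'] on this host.
    python -c 'import yaml; print(yaml.safe_load(open("/etc/origin/master/master-config.yaml"))["kubernetesMasterConfig"]["apiServerArguments"]["storage-backend"])'
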
***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:64 Wednesday 09 January 2019 15:57:34 +0100 (0:00:00.774) 0:18:08.684 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [debug] **************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:70 Wednesday 09 January 2019 15:57:34 +0100 (0:00:00.104) 0:18:08.788 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "Storage backend is set to etcd3" } TASK [openshift_facts] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_cluster.yml:73 Wednesday 09 January 2019 15:57:34 +0100 (0:00:00.140) 0:18:08.928 ***** Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "master", "selevel": null, "regexp": null, "src": null, "local_facts": {"ha": false}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"dns_ip": "172.30.80.240", "proxy_mode": "iptables", "nodename": "sp-os-master01.os.ad.scanplus.de", "bootstrapped": true, "sdn_mtu": "1450"}, "builddefaults": {"config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}}}, "logging": {"elasticsearch": {"pvc": {}, "ops": {"pvc": {}}}}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides"]}, "master": {"public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "api_port": "8443", "console_port": "8443", "loopback_user": 
"system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "console_path": "/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "console_use_ssl": true, "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "cluster_method": "native", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "admission_plugin_config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}, "BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}, "openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"key": "images.openshift.io/deny-execution", "value": "true"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}, "named_certificates": [{"certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": ["sp-os-master01.os.ad.scanplus.de"], "cafile": "/etc/origin/master/named_certificates/ca.crt"}], "manage_htpasswd": true, "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "controllers_port": "8444", "session_name": "ssn"}, "common": {"is_etcd_system_container": false, "ip": "172.30.80.240", "dns_domain": "cluster.local", "is_master_system_container": false, "public_ip": "172.30.80.240", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "etcd_runtime": "host", "rolling_restart_mode": "services", "hostname": "sp-os-master01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "is_openvswitch_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "kube_svc_ip": "172.18.128.1", "config_base": "/etc/origin", "all_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "is_containerized": false, "no_proxy_etcd_host_ips": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "portal_net": "172.18.128.0/17", "deployment_type": "openshift-enterprise"}, "hosted": {"templates": {"kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig"}, "routers": [{"name": "router", "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "replicas": "{{ replicas | default(1) }}", "serviceaccount": "router", "namespace": "default", "stats_port": 1936, "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", 
"ports": ["80:80", "443:443"]}], "infra": {"selector": "region=infra"}, "registry": {"force": [false], "name": "docker-registry", "serviceaccount": "registry", "edits": [{"action": "put", "value": {"updatePeriodSeconds": 1, "timeoutSeconds": 600, "maxSurge": "25%", "intervalSeconds": 1, "maxUnavailable": "25%"}, "key": "spec.strategy.rollingParams"}], "selector": "region=infra", "cert": {"expire": {"days": 730}}, "env": {"vars": {}}, "volumes": [], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "router": {"certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "create_certificate": true, "image": "openshift3/ose-${component}:${version}", "selector": "region=infra", "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "docker": {"registry": {"insecure": {"default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}"}}}, "wfp": {"rc": {"phase": {"msg": "All items completed", "changed": true, "results": [{"_ansible_parsed": true, "stderr_lines": [], "rc": 0, "_ansible_item_result": true, "end": "2018-01-31 14:15:11.698797", "_ansible_no_log": false, "stdout": "Complete", "cmd": ["oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }"], "attempts": 1, "item": [{"name": "router", "certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "stats_port": 1936, "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "images": "openshift3/ose-${component}:${version}", "selector": "region=infra", "ports": ["80:80", "443:443"]}, {"_ansible_parsed": true, "stderr_lines": [], "_ansible_item_result": true, "end": "2018-01-31 14:15:11.096068", "_ansible_no_log": false, "stdout": "1", "cmd": ["oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "rc": 0, "item": {"name": "router", "certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "images": "openshift3/ose-${component}:${version}", "stats_port": 1936, "ports": ["80:80", "443:443"]}, "delta": 
"0:00:00.196315", "stderr": "", "changed": true, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .status.latestVersion }\'", "removes": null, "creates": null, "chdir": null, "stdin": null}}, "stdout_lines": ["1"], "start": "2018-01-31 14:15:10.899753", "_ansible_ignore_errors": null, "failed": false}], "delta": "0:00:00.199963", "stderr": "", "changed": true, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }\'", "removes": null, "warn": true, "chdir": null, "stdin": null}}, "stdout_lines": ["Complete"], "failed_when_result": false, "start": "2018-01-31 14:15:11.498834", "_ansible_ignore_errors": null, "failed": false}]}}}}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "buildoverrides": {"config": {"BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}}}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "builddefaults": { "config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } } } }, "buildoverrides": { "config": { "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } } } }, "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.240", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "no_proxy_etcd_host_ips": "172.30.80.240", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "public_ip": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "rolling_restart_mode": "services", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "hosted": { "docker": { "registry": { "insecure": { "default": "{{ openshift_docker_hosted_registry_insecure | default(False) 
}}" } } }, "infra": { "selector": "region=infra" }, "registry": { "cert": { "expire": { "days": 730 } }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams", "value": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 } } ], "env": { "vars": {} }, "force": [ false ], "name": "docker-registry", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "serviceaccount": "registry", "volumes": [], "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "router": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "create_certificate": true, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "image": "openshift3/ose-${component}:${version}", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "routers": [ { "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "{{ replicas | default(1) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "serviceaccount": "router", "stats_port": 1936 } ], "templates": { "kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig" }, "wfp": { "rc": { "phase": { "changed": true, "msg": "All items completed", "results": [ { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "attempts": 1, "changed": true, "cmd": [ "oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\.io/deployment\\.phase }" ], "delta": "0:00:00.199963", "end": "2018-01-31 14:15:11.698797", "failed": false, "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .metadata.annotations.openshift\\.io/deployment\\.phase }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": [ { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": [ "oc", "get", 
"deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }" ], "delta": "0:00:00.196315", "end": "2018-01-31 14:15:11.096068", "failed": false, "invocation": { "module_args": { "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .status.latestVersion }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, "rc": 0, "start": "2018-01-31 14:15:10.899753", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": [ "1" ] } ], "rc": 0, "start": "2018-01-31 14:15:11.498834", "stderr": "", "stderr_lines": [], "stdout": "Complete", "stdout_lines": [ "Complete" ] } ] } } } }, "logging": { "elasticsearch": { "ops": { "pvc": {} }, "pvc": {} } }, "master": { "admission_plugin_config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } }, "api_port": "8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "cluster_method": "native", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "manage_htpasswd": true, "named_certificates": [ { "cafile": "/etc/origin/master/named_certificates/ca.crt", "certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": [ "sp-os-master01.os.ad.scanplus.de" ] } ], "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.240", "nodename": "sp-os-master01.os.ad.scanplus.de", 
"proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "ha": false }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "master", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } META: ran handlers META: ran handlers PLAY [Verify masters are already upgraded] ********************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/config.yml:59 Wednesday 09 January 2019 15:57:35 +0100 (0:00:01.073) 0:18:10.002 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Validate configuration for rolling restart] *************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:7 Wednesday 09 January 2019 15:57:35 +0100 (0:00:00.124) 0:18:10.126 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Create temp file on localhost] **************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [command] ************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:19 Wednesday 09 January 2019 15:57:35 +0100 (0:00:00.063) 0:18:10.189 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH 
LOCAL CONNECTION FOR USER: root EXEC /bin/sh -c '/usr/bin/python2 && sleep 0' ok: [localhost -> localhost] => { "changed": false, "cmd": [ "mktemp" ], "delta": "0:00:00.003120", "end": "2019-01-09 15:57:36.104766", "invocation": { "module_args": { "_raw_params": "mktemp", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:57:36.101646", "stderr": "", "stderr_lines": [], "stdout": "/tmp/tmp.sclU9ATs6j", "stdout_lines": [ "/tmp/tmp.sclU9ATs6j" ] } META: ran handlers META: ran handlers PLAY [Check if temp file exists on any masters] ***************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [stat] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:26 Wednesday 09 January 2019 15:57:36 +0100 (0:00:00.262) 0:18:10.452 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/tmp/tmp.sclU9ATs6j", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1547045856.1033802, "block_size": 4096, "inode": 660254, "isgid": false, "size": 0, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 0, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": false, "gr_name": "root", "path": "/tmp/tmp.sclU9ATs6j", "xusr": false, "atime": 1547045856.1033802, "isdir": false, "ctime": 1547045856.1033802, "isblk": false, "xgrp": false, "dev": 64769, "roth": false, "isfifo": false, "mode": "0600", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/tmp/tmp.sclU9ATs6j" } }, "stat": { "atime": 1547045856.1033802, "block_size": 4096, "blocks": 0, "ctime": 1547045856.1033802, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 660254, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1547045856.1033802, "nlink": 1, "path": "/tmp/tmp.sclU9ATs6j", 
"pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } META: ran handlers META: ran handlers PLAY [Cleanup temp file on localhost] *************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [file] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:40 Wednesday 09 January 2019 15:57:36 +0100 (0:00:00.223) 0:18:10.675 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0' ok: [localhost] => { "changed": false, "diff": { "after": { "path": "/tmp/tmp.sclU9ATs6j", "state": "absent" }, "before": { "path": "/tmp/tmp.sclU9ATs6j", "state": "file" } }, "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "owner": null, "path": "/tmp/tmp.sclU9ATs6j", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "absent", "unsafe_writes": null } }, "path": "/tmp/tmp.sclU9ATs6j", "state": "absent" } META: ran handlers META: ran handlers PLAY [Warn if restarting the system where ansible is running] *************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [pause] **************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:47 Wednesday 09 January 2019 15:57:36 +0100 (0:00:00.270) 0:18:10.945 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:59 Wednesday 09 January 2019 15:57:36 +0100 (0:00:00.117) 0:18:11.063 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional 
result was False" } META: ran handlers META: ran handlers PLAY [Verify upgrade targets] *********************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Fail when OpenShift is not installed] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:2 Wednesday 09 January 2019 15:57:36 +0100 (0:00:00.126) 0:18:11.189 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [container_runtime : Create credentials for oreg_url] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/registry_auth.yml:5 Wednesday 09 January 2019 15:57:37 +0100 (0:00:00.110) 0:18:11.299 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/root/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/root/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } TASK [container_runtime : Create for any additional registries] ************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/registry_auth.yml:22 Wednesday 09 January 2019 15:57:41 +0100 (0:00:04.065) 0:18:15.365 ***** TASK [Verify containers are available for upgrade] ************************************************************************************************************************************************************************************************************************************************************************** task path: 
/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:13 Wednesday 09 January 2019 15:57:41 +0100 (0:00:00.115) 0:18:15.480 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Check latest available OpenShift RPM version] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:21 Wednesday 09 January 2019 15:57:41 +0100 (0:00:00.135) 0:18:15.615 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/repoquery.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"retries": 4, "verbose": false, "name": "atomic-openshift-3.11*", "ignore_excluders": true, "query_type": "repos", "retry_interval": 5, "match_version": null, "state": "list", "show_duplicates": false}}, "state": "list", "changed": false, "check_mode": false, "results": {"package_found": true, "cmd": "/usr/bin/repoquery --plugins --quiet --pkgnarrow=repos --queryformat=%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release} --config=/tmp/tmpJWEA_F atomic-openshift-3.11*", "returncode": 0, "package_name": "atomic-openshift-3.11*", "versions": {"latest_full": "3.11.51-1.git.0.1560686.el7", "available_versions": ["3.11.51"], "available_versions_full": ["3.11.51-1.git.0.1560686.el7"], "latest": "3.11.51"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "check_mode": false, "invocation": { "module_args": { "ignore_excluders": true, "match_version": null, "name": "atomic-openshift-3.11*", "query_type": "repos", "retries": 4, "retry_interval": 5, "show_duplicates": false, "state": "list", "verbose": false } }, "results": { "cmd": "/usr/bin/repoquery --plugins --quiet --pkgnarrow=repos --queryformat=%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release} --config=/tmp/tmpJWEA_F atomic-openshift-3.11*", "package_found": true, "package_name": "atomic-openshift-3.11*", "returncode": 0, "versions": { "available_versions": [ "3.11.51" ], "available_versions_full": [ "3.11.51-1.git.0.1560686.el7" ], "latest": "3.11.51", "latest_full": "3.11.51-1.git.0.1560686.el7" } }, "state": "list" } TASK [Fail when unable to determine available OpenShift RPM version] ******************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:27 Wednesday 09 January 2019 15:57:50 +0100 (0:00:08.803) 0:18:24.419 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Set fact 
avail_openshift_version] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:33 Wednesday 09 January 2019 15:57:50 +0100 (0:00:00.120) 0:18:24.539 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "avail_openshift_version": "3.11.51-1.git.0.1560686.el7" }, "changed": false } TASK [Set openshift_pkg_version when not specified] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:36 Wednesday 09 January 2019 15:57:50 +0100 (0:00:00.150) 0:18:24.690 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Verify OpenShift RPMs are available for upgrade] ********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:41 Wednesday 09 January 2019 15:57:50 +0100 (0:00:00.118) 0:18:24.809 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Fail when openshift version does not meet minimum requirement for Origin upgrade] ************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml:47 Wednesday 09 January 2019 15:57:50 +0100 (0:00:00.120) 0:18:24.930 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Verify docker upgrade targets] **************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [container_runtime : set_fact] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:6 Wednesday 09 January 2019 15:57:50 +0100 (0:00:00.120) 0:18:25.051 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "docker_upgrade": true }, "changed": false } TASK [container_runtime : Check 
if Docker is installed] ********************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:10 Wednesday 09 January 2019 15:57:50 +0100 (0:00:00.145) 0:18:25.196 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:57:51.185255", "stdout": "docker-1.13.1-74.git6e3bb8e.el7.x86_64", "cmd": ["rpm", "-q", "docker"], "rc": 0, "start": "2019-01-09 15:57:51.146114", "stderr": "", "delta": "0:00:00.039141", "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "rpm -q docker", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": [ "rpm", "-q", "docker" ], "delta": "0:00:00.039141", "end": "2019-01-09 15:57:51.185255", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "rpm -q docker", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "rc": 0, "start": "2019-01-09 15:57:51.146114", "stderr": "", "stderr_lines": [], "stdout": "docker-1.13.1-74.git6e3bb8e.el7.x86_64", "stdout_lines": [ "docker-1.13.1-74.git6e3bb8e.el7.x86_64" ] } TASK [container_runtime : Get current version of Docker] ******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:18 Wednesday 09 January 2019 15:57:51 +0100 (0:00:00.331) 0:18:25.527 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:00.100472", "stdout": "1.13.1", "cmd": ["repoquery", "--plugins", "--installed", "--qf", "%{version}", "docker"], "rc": 0, "start": "2019-01-09 15:57:51.479193", "stderr": "", "delta": "0:00:08.621279", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "repoquery --plugins --installed --qf \'%{version}\' docker", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: 
[sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "cmd": [ "repoquery", "--plugins", "--installed", "--qf", "%{version}", "docker" ], "delta": "0:00:08.621279", "end": "2019-01-09 15:58:00.100472", "invocation": { "module_args": { "_raw_params": "repoquery --plugins --installed --qf '%{version}' docker", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:57:51.479193", "stderr": "", "stderr_lines": [], "stdout": "1.13.1", "stdout_lines": [ "1.13.1" ] } TASK [container_runtime : Get latest available version of Docker] *********************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:26 Wednesday 09 January 2019 15:58:00 +0100 (0:00:08.919) 0:18:34.447 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:05.413507", "stdout": "1.13.1", "cmd": ["repoquery", "--plugins", "--qf", "%{version}", "docker"], "rc": 0, "start": "2019-01-09 15:58:00.399845", "stderr": "", "delta": "0:00:05.013662", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "repoquery --plugins --qf \'%{version}\' \\"docker\\"", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "cmd": [ "repoquery", "--plugins", "--qf", "%{version}", "docker" ], "delta": "0:00:05.013662", "end": "2019-01-09 15:58:05.413507", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "repoquery --plugins --qf '%{version}' \"docker\"", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:00.399845", "stderr": "", "stderr_lines": [], "stdout": "1.13.1", "stdout_lines": [ "1.13.1" ] } TASK [container_runtime : Required docker version not available (non-atomic)] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:39 Wednesday 09 January 2019 15:58:05 +0100 (0:00:05.307) 0:18:39.755 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [container_runtime : set_fact] 
***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:51 Wednesday 09 January 2019 15:58:05 +0100 (0:00:00.125) 0:18:39.880 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_docker_upgrade": false }, "changed": false } TASK [container_runtime : set_fact] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:55 Wednesday 09 January 2019 15:58:05 +0100 (0:00:00.158) 0:18:40.039 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "docker_version": "1.13.1" }, "changed": false } TASK [container_runtime : Flag for Docker upgrade if necessary] ************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:62 Wednesday 09 January 2019 15:58:06 +0100 (0:00:00.300) 0:18:40.340 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [container_runtime : Determine available Docker] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:71 Wednesday 09 January 2019 15:58:06 +0100 (0:00:00.122) 0:18:40.462 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [container_runtime : set_fact] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:76 Wednesday 09 January 2019 15:58:06 +0100 (0:00:00.231) 0:18:40.694 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [container_runtime : Required docker version is unavailable (atomic)] ************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/container_runtime/tasks/docker_upgrade_check.yml:80 Wednesday 09 January 2019 15:58:06 +0100 (0:00:00.106) 0:18:40.801 ***** skipping: 
[sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Verify Requirements] ************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Run variable sanity checks] ******************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/sanity_checks.yml:14 Wednesday 09 January 2019 15:58:06 +0100 (0:00:00.128) 0:18:40.930 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "Sanity Checks passed" } TASK [Validate openshift_node_groups and openshift_node_group_name] ********************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/init/sanity_checks.yml:18 Wednesday 09 January 2019 15:58:14 +0100 (0:00:07.359) 0:18:48.289 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "msg": "Node group checks passed" } META: ran handlers META: ran handlers PLAY [Verify Node NetworkManager] ******************************************************************************************************************************************************************************************************************************************************************************************* skipping: no hosts matched PLAY [Confirm upgrade will not make critical changes] *********************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Confirm Reconcile Security Context Constraints will not change current SCCs] ****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/v3_11/upgrade_control_plane_part2.yml:52 Wednesday 09 January 2019 15:58:16 +0100 (0:00:02.021) 0:18:50.311 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:16.484116", "stdout": "", "cmd": ["oc", "adm", "policy", "--config=/etc/origin/master/admin.kubeconfig", "reconcile-sccs", 
"--additive-only=true", "-o", "name"], "rc": 0, "start": "2019-01-09 15:58:16.256865", "stderr": "", "delta": "0:00:00.227251", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc adm policy --config=/etc/origin/master/admin.kubeconfig reconcile-sccs --additive-only=true -o name", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "adm", "policy", "--config=/etc/origin/master/admin.kubeconfig", "reconcile-sccs", "--additive-only=true", "-o", "name" ], "delta": "0:00:00.227251", "end": "2019-01-09 15:58:16.484116", "invocation": { "module_args": { "_raw_params": "oc adm policy --config=/etc/origin/master/admin.kubeconfig reconcile-sccs --additive-only=true -o name", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:16.256865", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/v3_11/upgrade_control_plane_part2.yml:60 Wednesday 09 January 2019 15:58:16 +0100 (0:00:00.517) 0:18:50.828 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Ensure metrics-server is installed before upgrading the controller-manager] ******************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [metrics_server : Create temp directory for doing work in on target] *************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/metrics_server/tasks/main.yaml:2 Wednesday 09 January 2019 15:58:16 +0100 (0:00:00.127) 0:18:50.955 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [metrics_server : Create temp directory for all our templates] ********************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/metrics_server/tasks/main.yaml:7 Wednesday 09 January 2019 15:58:16 +0100 (0:00:00.119) 0:18:51.075 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [metrics_server : Create temp directory local on control node] 
********************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/metrics_server/tasks/main.yaml:12 Wednesday 09 January 2019 15:58:16 +0100 (0:00:00.119) 0:18:51.195 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [metrics_server : Copy the admin client config(s)] ********************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/metrics_server/tasks/main.yaml:19 Wednesday 09 January 2019 15:58:17 +0100 (0:00:00.111) 0:18:51.306 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [metrics_server : Install metrics-server] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/metrics_server/tasks/main.yaml:25 Wednesday 09 January 2019 15:58:17 +0100 (0:00:00.103) 0:18:51.410 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [metrics_server : include_tasks] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/metrics_server/tasks/main.yaml:29 Wednesday 09 January 2019 15:58:17 +0100 (0:00:00.103) 0:18:51.513 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [metrics_server : Delete temp directory] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/metrics_server/tasks/main.yaml:32 Wednesday 09 January 2019 15:58:17 +0100 (0:00:00.112) 0:18:51.625 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Configure components that must be available prior to upgrade] ********************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [openshift_sdn : Ensure project exists] 
******************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:2 Wednesday 09 January 2019 15:58:17 +0100 (0:00:00.227) 0:18:51.853 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_project.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"admin_role": "admin", "display_name": null, "description": null, "admin": null, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "state": "present", "node_selector": [""], "debug": false, "name": "openshift-sdn"}}, "state": "present", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get namespace openshift-sdn -o json", "results": {"status": {"phase": "Active"}, "kind": "Namespace", "spec": {"finalizers": ["kubernetes"]}, "apiVersion": "v1", "metadata": {"name": "openshift-sdn", "resourceVersion": "93752667", "creationTimestamp": "2018-09-13T19:05:16Z", "annotations": {"openshift.io/sa.scc.supplemental-groups": "1000290000/10000", "openshift.io/display-name": "", "openshift.io/sa.scc.mcs": "s0:c17,c9", "openshift.io/description": "", "openshift.io/node-selector": "", "openshift.io/sa.scc.uid-range": "1000290000/10000"}, "selfLink": "/api/v1/namespaces/openshift-sdn", "uid": "f361c7b7-b787-11e8-9af4-005056aa3492"}}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "admin": null, "admin_role": "admin", "debug": false, "description": null, "display_name": null, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "openshift-sdn", "node_selector": [ "" ], "state": "present" } }, "results": { "cmd": "/usr/bin/oc get namespace openshift-sdn -o json", "results": { "apiVersion": "v1", "kind": "Namespace", "metadata": { "annotations": { "openshift.io/description": "", "openshift.io/display-name": "", "openshift.io/node-selector": "", "openshift.io/sa.scc.mcs": "s0:c17,c9", "openshift.io/sa.scc.supplemental-groups": "1000290000/10000", "openshift.io/sa.scc.uid-range": "1000290000/10000" }, "creationTimestamp": "2018-09-13T19:05:16Z", "name": "openshift-sdn", "resourceVersion": "93752667", "selfLink": "/api/v1/namespaces/openshift-sdn", "uid": "f361c7b7-b787-11e8-9af4-005056aa3492" }, "spec": { "finalizers": [ "kubernetes" ] }, "status": { "phase": "Active" } }, "returncode": 0 }, "state": "present" } TASK [openshift_sdn : Make temp directory for templates] ******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:9 Wednesday 09 January 2019 15:58:18 +0100 (0:00:00.700) 0:18:52.553 ***** Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:18.509764", "stdout": "/tmp/ansible-GjAYpx", "cmd": ["mktemp", "-d", "/tmp/ansible-XXXXXX"], "rc": 0, "start": "2019-01-09 15:58:18.506855", "stderr": "", "delta": "0:00:00.002909", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "mktemp -d /tmp/ansible-XXXXXX", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/ansible-XXXXXX" ], "delta": "0:00:00.002909", "end": "2019-01-09 15:58:18.509764", "invocation": { "module_args": { "_raw_params": "mktemp -d /tmp/ansible-XXXXXX", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:18.506855", "stderr": "", "stderr_lines": [], "stdout": "/tmp/ansible-GjAYpx", "stdout_lines": [ "/tmp/ansible-GjAYpx" ] } TASK [openshift_sdn : Copy templates to temp directory] ********************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:14 Wednesday 09 January 2019 15:58:18 +0100 (0:00:00.307) 0:18:52.861 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127 `" && echo ansible-tmp-1547045898.7-90332451353127="` echo /root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045898.7-90332451353127=/root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o 
ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/ansible-GjAYpx/sdn-images.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-images.yaml TO /root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-images.yaml /root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/ /root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/ansible-GjAYpx/sdn-images.yaml", "checksum": "69aeb6f0ab2377990845b14dd15e22e6fa84ebcc", "md5sum": "ebae0d8c8106f9b2c076f9bb1f89c450", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0644", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "sdn-images.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/ansible-GjAYpx/sdn-images.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/source", "checksum": "69aeb6f0ab2377990845b14dd15e22e6fa84ebcc", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}, "size": 197}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r 
/root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-images.yaml) => { "changed": true, "checksum": "69aeb6f0ab2377990845b14dd15e22e6fa84ebcc", "dest": "/tmp/ansible-GjAYpx/sdn-images.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "sdn-images.yaml", "attributes": null, "backup": false, "checksum": "69aeb6f0ab2377990845b14dd15e22e6fa84ebcc", "content": null, "delimiter": null, "dest": "/tmp/ansible-GjAYpx/sdn-images.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/source", "unsafe_writes": null, "validate": null } }, "item": "/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-images.yaml", "md5sum": "ebae0d8c8106f9b2c076f9bb1f89c450", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 197, "src": "/root/.ansible/tmp/ansible-tmp-1547045898.7-90332451353127/source", "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648 `" && echo ansible-tmp-1547045899.32-191334680399648="` echo /root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045899.32-191334680399648=/root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/ansible-GjAYpx/sdn.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn.yaml TO /root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn.yaml /root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/ /root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/ansible-GjAYpx/sdn.yaml", "checksum": "589dc6c3b04b40a2ee786ff8bb365bfcc1066ae4", "md5sum": "281c47f93a3456cd229fe1bbf060b1bb", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0644", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "sdn.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/ansible-GjAYpx/sdn.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/source", "checksum": "589dc6c3b04b40a2ee786ff8bb365bfcc1066ae4", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}, "size": 7374}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn.yaml) => { "changed": true, "checksum": "589dc6c3b04b40a2ee786ff8bb365bfcc1066ae4", "dest": "/tmp/ansible-GjAYpx/sdn.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "sdn.yaml", "attributes": null, "backup": false, "checksum": "589dc6c3b04b40a2ee786ff8bb365bfcc1066ae4", "content": null, "delimiter": null, "dest": "/tmp/ansible-GjAYpx/sdn.yaml", "directory_mode": null, "follow": false, "force": true, "group": 
null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/source", "unsafe_writes": null, "validate": null } }, "item": "/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn.yaml", "md5sum": "281c47f93a3456cd229fe1bbf060b1bb", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 7374, "src": "/root/.ansible/tmp/ansible-tmp-1547045899.32-191334680399648/source", "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266 `" && echo ansible-tmp-1547045899.76-220867603429266="` echo /root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045899.76-220867603429266=/root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/ansible-GjAYpx/sdn-policy.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-policy.yaml TO /root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-policy.yaml /root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o 
ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/ /root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/ansible-GjAYpx/sdn-policy.yaml", "checksum": "d59132c739f7758eba4025fc7d4d04f54578355e", "md5sum": "6deb4792c24ad824dd2d67a4e3dd2a09", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0644", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "sdn-policy.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/ansible-GjAYpx/sdn-policy.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/source", "checksum": "d59132c739f7758eba4025fc7d4d04f54578355e", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}, "size": 830}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-policy.yaml) => { "changed": true, "checksum": "d59132c739f7758eba4025fc7d4d04f54578355e", "dest": "/tmp/ansible-GjAYpx/sdn-policy.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "sdn-policy.yaml", "attributes": null, "backup": false, "checksum": "d59132c739f7758eba4025fc7d4d04f54578355e", "content": null, "delimiter": null, "dest": "/tmp/ansible-GjAYpx/sdn-policy.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/source", "unsafe_writes": null, "validate": null } }, "item": "/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-policy.yaml", "md5sum": "6deb4792c24ad824dd2d67a4e3dd2a09", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 830, "src": "/root/.ansible/tmp/ansible-tmp-1547045899.76-220867603429266/source", "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION 
FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093 `" && echo ansible-tmp-1547045900.27-206649735282093="` echo /root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045900.27-206649735282093=/root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/ansible-GjAYpx/sdn-ovs.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-ovs.yaml TO /root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-ovs.yaml /root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/ /root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o 
ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/ansible-GjAYpx/sdn-ovs.yaml", "checksum": "f055f303d43bf92fc24f92b248baeb87167b3a98", "md5sum": "7ce03a04f5e2df101f95927e84ad9dab", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0644", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "sdn-ovs.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/ansible-GjAYpx/sdn-ovs.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/source", "checksum": "f055f303d43bf92fc24f92b248baeb87167b3a98", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}, "size": 3805}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '')
changed: [sp-os-master01.os.ad.scanplus.de] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-ovs.yaml) => { "changed": true, "checksum": "f055f303d43bf92fc24f92b248baeb87167b3a98", "dest": "/tmp/ansible-GjAYpx/sdn-ovs.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "sdn-ovs.yaml", "attributes": null, "backup": false, "checksum": "f055f303d43bf92fc24f92b248baeb87167b3a98", "content": null, "delimiter": null, "dest": "/tmp/ansible-GjAYpx/sdn-ovs.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/source", "unsafe_writes": null, "validate": null } }, "item": "/usr/share/ansible/openshift-ansible/roles/openshift_sdn/files/sdn-ovs.yaml", "md5sum": "7ce03a04f5e2df101f95927e84ad9dab", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3805, "src": "/root/.ansible/tmp/ansible-tmp-1547045900.27-206649735282093/source", "state": "file", "uid": 0 }
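With all four templates staged in /tmp/ansible-GjAYpx, the rest of the role is mechanical: pin the image tag, grant the SCC, reset the image stream tag, apply, clean up. Condensed from the module arguments logged above and below (a sketch, not the literal roles/openshift_sdn/tasks/main.yml; the mktemp register name and the loop shorthand are assumptions):

  # Sketch of the openshift_sdn task flow as reconstructed from this log.
  # Module arguments are verbatim from the log; variable names are guesses.
  - name: Make temp directory for templates
    command: mktemp -d /tmp/ansible-XXXXXX
    register: mktemp

  - name: Copy templates to temp directory
    copy:
      src: "{{ item }}"
      dest: "{{ mktemp.stdout }}"
    with_items:
      - sdn-images.yaml
      - sdn.yaml
      - sdn-policy.yaml
      - sdn-ovs.yaml

  - name: Update the image tag
    yedit:
      src: "{{ mktemp.stdout }}/sdn-images.yaml"
      key: tag.from.name
      value: registry.redhat.io/openshift3/ose-node:v3.11

  - name: Ensure the service account can run privileged
    oc_adm_policy_user:
      namespace: openshift-sdn
      resource_kind: scc
      resource_name: privileged
      state: present
      user: system:serviceaccount:openshift-sdn:sdn

  # Deleting the istag first makes the apply below recreate it, forcing a
  # fresh resolution of the DockerImage reference on every upgrade run.
  - name: Remove the image stream tag
    command: >
      oc --config=/etc/origin/master/admin.kubeconfig
      delete -n openshift-sdn istag node:v3.11 --ignore-not-found

  - name: Apply the config
    shell: oc --config=/etc/origin/master/admin.kubeconfig apply -f "{{ mktemp.stdout }}"

  - name: Remove temp directory
    file:
      path: "{{ mktemp.stdout }}"
      state: absent

The apply output further down confirms the effect of the istag reset: the tag is recreated ("created") while the sdn and ovs daemonsets are merely "configured".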
TASK [openshift_sdn : Update the image tag] *********************************************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:23 Wednesday 09 January 2019 15:58:20 +0100 (0:00:02.188) 0:18:55.049 *****
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "tag.from.name", "src": "/tmp/ansible-GjAYpx/sdn-images.yaml", "backup": false, "update": false, "value": "registry.redhat.io/openshift3/ose-node:v3.11", "backup_ext": ".20190109T155821", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": true, "result": [{"edit": {"kind": "ImageStreamTag", "tag": {"from": {"kind": "DockerImage", "name": "registry.redhat.io/openshift3/ose-node:v3.11"}, "reference": true}, "apiVersion": "image.openshift.io/v1", "metadata": {"namespace": "openshift-sdn", "name": "node:v3.11"}}, "key": "tag.from.name"}]}\n', '')
changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T155821", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "tag.from.name", "separator": ".", "src": "/tmp/ansible-GjAYpx/sdn-images.yaml", "state": "present", "update": false, "value": "registry.redhat.io/openshift3/ose-node:v3.11", "value_type": "" } }, "result": [ { "edit": { "apiVersion": "image.openshift.io/v1", "kind": "ImageStreamTag", "metadata": { "name": "node:v3.11", "namespace": "openshift-sdn" }, "tag": { "from": { "kind": "DockerImage", "name": "registry.redhat.io/openshift3/ose-node:v3.11" }, "reference": true } }, "key": "tag.from.name" } ], "state": "present" }
TASK [openshift_sdn : Ensure the service account can run privileged] ********************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:29 Wednesday 09 January 2019 15:58:21 +0100 (0:00:00.331) 0:18:55.381 *****
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_adm_policy_user.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"resource_name": "privileged", "rolebinding_name": null, "namespace": "openshift-sdn", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "resource_kind": "scc", "state": "present", "user": "system:serviceaccount:openshift-sdn:sdn", "role_namespace": null, "debug": false}}, "changed": false, "present": "present"}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "debug": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "namespace": "openshift-sdn", "resource_kind": "scc", "resource_name":
"privileged", "role_namespace": null, "rolebinding_name": null, "state": "present", "user": "system:serviceaccount:openshift-sdn:sdn" } }, "present": "present" } TASK [openshift_sdn : Remove the image stream tag] ************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:38 Wednesday 09 January 2019 15:58:21 +0100 (0:00:00.736) 0:18:56.117 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:22.343482", "stdout": "imagestreamtag.image.openshift.io \\"node:v3.11\\" deleted", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "delete", "-n", "openshift-sdn", "istag", "node:v3.11", "--ignore-not-found"], "rc": 0, "start": "2019-01-09 15:58:22.086936", "stderr": "", "delta": "0:00:00.256546", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig delete -n openshift-sdn istag node:v3.11 --ignore-not-found", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "delete", "-n", "openshift-sdn", "istag", "node:v3.11", "--ignore-not-found" ], "delta": "0:00:00.256546", "end": "2019-01-09 15:58:22.343482", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig delete -n openshift-sdn istag node:v3.11 --ignore-not-found", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:22.086936", "stderr": "", "stderr_lines": [], "stdout": "imagestreamtag.image.openshift.io \"node:v3.11\" deleted", "stdout_lines": [ "imagestreamtag.image.openshift.io \"node:v3.11\" deleted" ] } TASK [openshift_sdn : Apply the config] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:50 Wednesday 09 January 2019 15:58:22 +0100 (0:00:00.571) 0:18:56.689 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r 
sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:23.082315", "stdout": "imagestreamtag.image.openshift.io/node:v3.11 created\\ndaemonset.apps/ovs configured\\nserviceaccount/sdn unchanged\\nclusterrolebinding.authorization.openshift.io/sdn-cluster-reader configured\\nclusterrolebinding.authorization.openshift.io/sdn-reader configured\\nclusterrolebinding.authorization.openshift.io/sdn-node-proxier configured\\ndaemonset.apps/sdn configured", "cmd": "oc --config=/etc/origin/master/admin.kubeconfig apply -f \\"/tmp/ansible-GjAYpx\\"", "rc": 0, "start": "2019-01-09 15:58:22.648335", "stderr": "", "delta": "0:00:00.433980", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig apply -f \\"/tmp/ansible-GjAYpx\\"", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": "oc --config=/etc/origin/master/admin.kubeconfig apply -f \"/tmp/ansible-GjAYpx\"", "delta": "0:00:00.433980", "end": "2019-01-09 15:58:23.082315", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig apply -f \"/tmp/ansible-GjAYpx\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:22.648335", "stderr": "", "stderr_lines": [], "stdout": "imagestreamtag.image.openshift.io/node:v3.11 created\ndaemonset.apps/ovs configured\nserviceaccount/sdn unchanged\nclusterrolebinding.authorization.openshift.io/sdn-cluster-reader configured\nclusterrolebinding.authorization.openshift.io/sdn-reader configured\nclusterrolebinding.authorization.openshift.io/sdn-node-proxier configured\ndaemonset.apps/sdn configured", "stdout_lines": [ "imagestreamtag.image.openshift.io/node:v3.11 created", "daemonset.apps/ovs configured", "serviceaccount/sdn unchanged", "clusterrolebinding.authorization.openshift.io/sdn-cluster-reader configured", "clusterrolebinding.authorization.openshift.io/sdn-reader configured", "clusterrolebinding.authorization.openshift.io/sdn-node-proxier configured", "daemonset.apps/sdn configured" ] } TASK [openshift_sdn : Remove temp directory] ******************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_sdn/tasks/main.yml:54 Wednesday 09 January 2019 15:58:23 +0100 (0:00:00.742) 0:18:57.432 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/tmp/ansible-GjAYpx", "owner": null, "follow": true, "group": null, "unsafe_writes": null, 
"state": "absent", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "name": "/tmp/ansible-GjAYpx", "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "path": "/tmp/ansible-GjAYpx", "state": "absent", "changed": true, "diff": {"after": {"path": "/tmp/ansible-GjAYpx", "state": "absent"}, "before": {"path": "/tmp/ansible-GjAYpx", "state": "directory"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "diff": { "after": { "path": "/tmp/ansible-GjAYpx", "state": "absent" }, "before": { "path": "/tmp/ansible-GjAYpx", "state": "directory" } }, "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "name": "/tmp/ansible-GjAYpx", "owner": null, "path": "/tmp/ansible-GjAYpx", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "absent", "unsafe_writes": null } }, "path": "/tmp/ansible-GjAYpx", "state": "absent" } META: ran handlers META: ran handlers PLAY [Examine etcd serving certificate SAN] ********************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [slurp] **************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_main.yml:13 Wednesday 09 January 2019 15:58:23 +0100 (0:00:00.306) 0:18:57.738 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"content": 
"Q2VydGlmaWNhdGU6CiAgICBEYXRhOgogICAgICAgIFZlcnNpb246IDMgKDB4MikKICAgICAgICBTZXJpYWwgTnVtYmVyOiAxICgweDEpCiAgICBTaWduYXR1cmUgQWxnb3JpdGhtOiBzaGEyNTZXaXRoUlNBRW5jcnlwdGlvbgogICAgICAgIElzc3VlcjogQ049ZXRjZC1zaWduZXJAMTUxNzM5OTUwNwogICAgICAgIFZhbGlkaXR5CiAgICAgICAgICAgIE5vdCBCZWZvcmU6IEphbiAzMSAxMTo1MzoyNyAyMDE4IEdNVAogICAgICAgICAgICBOb3QgQWZ0ZXIgOiBKYW4gMzAgMTE6NTM6MjcgMjAyMyBHTVQKICAgICAgICBTdWJqZWN0OiBDTj1zcC1vcy1tYXN0ZXIwMS5vcy5hZC5zY2FucGx1cy5kZQogICAgICAgIFN1YmplY3QgUHVibGljIEtleSBJbmZvOgogICAgICAgICAgICBQdWJsaWMgS2V5IEFsZ29yaXRobTogcnNhRW5jcnlwdGlvbgogICAgICAgICAgICAgICAgUHVibGljLUtleTogKDIwNDggYml0KQogICAgICAgICAgICAgICAgTW9kdWx1czoKICAgICAgICAgICAgICAgICAgICAwMDplNDplMzoyYjowYjpjMzozYjo5NDoyNDplNDo3ODo4YToyMjo2ZjpiOToKICAgICAgICAgICAgICAgICAgICBiMzo3Mzo1ZDpiMzoyZDpkOTo0MDowNzplZDplNzo2Yjo2Nzo5YjpmNTo3YToKICAgICAgICAgICAgICAgICAgICA4Njo3MTpmNjozNzo3YjozNTo2NDozYTpkNTo0ZDoxODowMzpkNTo4NTo2YzoKICAgICAgICAgICAgICAgICAgICA4NTplNDplZDplOTpiNjo4YzpiNjo0NzphYjpkZDo0Mzo3Zjo1Mjo1MDoxYToKICAgICAgICAgICAgICAgICAgICA3Nzo2NjoyZTo1OTpkMTpjODplZDpjOTo4MTo5OTpkMzoyMjo4ZTpmODo4OToKICAgICAgICAgICAgICAgICAgICA4MjphMjoxMDo0MToyMzo0Nzo1NzphMTo5ZDo0ZDoyMTpjMjpjZDo1ODoyNToKICAgICAgICAgICAgICAgICAgICAzOTo4NDo3MzpjNjo1ODo5Mzo2Mzo1NzowNTpmZTowOTpmNjpjNTozNzoxYzoKICAgICAgICAgICAgICAgICAgICBjYjoyMDo3ZToxMjpmNTpiNTozZjo2MDozMTo4ZTpjMDpjMjplYzo3ODo2ODoKICAgICAgICAgICAgICAgICAgICA4ZToyMjpkODphNzo4YzoyODpmZDo4Mjo2ODoxZTo5ZTo5MDpmZTo0NzplNToKICAgICAgICAgICAgICAgICAgICAwMDowODo1MzoyMTo3MjphZDo5YTpiMzo5NjpkYTowYjpmODo5Mjo5ZDoyMToKICAgICAgICAgICAgICAgICAgICBiOTo5Yzo2ODpkYTpjNjo1NzowODpmODo4NzozYjpmZjoxNDowZTpmNjpkMDoKICAgICAgICAgICAgICAgICAgICBhZTpkMDo4MDphZDo1OTo4Zjo0NTplYjplMzpmYTpiNDpiODpkYzoxNDpkNDoKICAgICAgICAgICAgICAgICAgICBiZDowMzozZDpmNTo4ZTo3NTo2MDpmNTo2ZTo4NjpmZDoyMDo2ODoxOTpjYzoKICAgICAgICAgICAgICAgICAgICAzNDpkZDphYzpkYTpjZjoxYTo5YToyZjoyODo1NzozYTo5MzpjZToyNjpiYjoKICAgICAgICAgICAgICAgICAgICBhNDo3ZDphYjo3YzozODplZTo3MzpmMTplNjo3ZjowOTpmNjo3NDpmNzplYzoKICAgICAgICAgICAgICAgICAgICAwYjphMzozZDoxZjpmNDpiMTowYzpiZDozMjo0OToyZjo0OTo4Zjo3MjplZjoKICAgICAgICAgICAgICAgICAgICAwMTplMjoxMzo1NTplMToyYTo2ODoxZDo3MDphMjphNTpiYTozZDpjMTozNToKICAgICAgICAgICAgICAgICAgICBlODoxMwogICAgICAgICAgICAgICAgRXhwb25lbnQ6IDY1NTM3ICgweDEwMDAxKQogICAgICAgIFg1MDl2MyBleHRlbnNpb25zOgogICAgICAgICAgICBYNTA5djMgQXV0aG9yaXR5IEtleSBJZGVudGlmaWVyOiAKICAgICAgICAgICAgICAgIGtleWlkOkQ4OkI5OjhGOjg0OjE3OkU3OjRCOjBCOjgxOjU0OkU1OjIwOjgzOjM5OjFGOjQyOjk2OjZGOjBGOkE3CiAgICAgICAgICAgICAgICBEaXJOYW1lOi9DTj1ldGNkLXNpZ25lckAxNTE3Mzk5NTA3CiAgICAgICAgICAgICAgICBzZXJpYWw6RTI6OEE6MzM6MTQ6OUI6OEY6QUU6RjYKCiAgICAgICAgICAgIFg1MDl2MyBCYXNpYyBDb25zdHJhaW50czogY3JpdGljYWwKICAgICAgICAgICAgICAgIENBOkZBTFNFCiAgICAgICAgICAgIFg1MDl2MyBFeHRlbmRlZCBLZXkgVXNhZ2U6IAogICAgICAgICAgICAgICAgVExTIFdlYiBTZXJ2ZXIgQXV0aGVudGljYXRpb24KICAgICAgICAgICAgWDUwOXYzIEtleSBVc2FnZTogCiAgICAgICAgICAgICAgICBEaWdpdGFsIFNpZ25hdHVyZSwgS2V5IEVuY2lwaGVybWVudAogICAgICAgICAgICBYNTA5djMgU3ViamVjdCBLZXkgSWRlbnRpZmllcjogCiAgICAgICAgICAgICAgICA2NzoyMjowNzo2QTpFRTo5RDo4RTpCRDo3OTpCOTo2NTpCMzpDMDozNjpCNzpCNzoyMjpBMzo5ODo0MgogICAgICAgICAgICBYNTA5djMgU3ViamVjdCBBbHRlcm5hdGl2ZSBOYW1lOiAKICAgICAgICAgICAgICAgIElQIEFkZHJlc3M6MTcyLjMwLjgwLjI0MCwgRE5TOnNwLW9zLW1hc3RlcjAxLm9zLmFkLnNjYW5wbHVzLmRlCiAgICBTaWduYXR1cmUgQWxnb3JpdGhtOiBzaGEyNTZXaXRoUlNBRW5jcnlwdGlvbgogICAgICAgICAwYTowNzpjMTo0NDo4NTpjMzo4OTowNDo2MTpkODowMDo2YTpiMTphYTo5NzpkMzowYjo4ODoKICAgICAgICAgZTU6ZWU6MmE6ZDQ6MTk6YmI6ODA6YzQ6Y2U6NjY6NmU6Y2E6MDA6NWI6Yzk6MzU6ZTI6MWI6CiAgICAgICAgIDk0OmJlOjY3OjMwOjc4OjQ1OjZhOmJiOjQxOjFkOmNiOjA5OjljOjM5OjI1OjNmOjI5OmMxOgogICAgICAgICBhMz
o4YjpjNDpiMTozYjpiNjo3NDpmYjozOTo4YjoyODowYjo1MTo5ZDo0Njo5YzplMDoxNzoKICAgICAgICAgMDM6NTc6OGI6N2M6Njc6OTM6YmI6NjA6NzU6ZmM6ZDE6YjU6YmQ6NDA6ZjU6ZGU6OGU6YWY6CiAgICAgICAgIDQyOmE1OmRkOjNmOjY4OjVmOmVlOjQ0OmM1OjVmOjg0OmI4OjRkOjE5OjllOmIxOjEyOmJhOgogICAgICAgICBhNzpiMDpiMjphMzo3YToyZDpjZToxMzpjNzoxMzo0ZjpkNjoyYjowMTphOTpiNjoxNzpkMToKICAgICAgICAgZDU6OTM6MjQ6NDM6OTk6N2U6NDk6ODE6MmE6MTE6MTA6ZjY6Njc6OWQ6MzM6MzQ6Y2Q6ZWI6CiAgICAgICAgIDYzOjZjOjhlOjM1OmM1OmQxOjkyOjUxOmJmOmJiOjZlOmQzOmViOjBjOjhjOjFlOjNmOjIyOgogICAgICAgICBhMzo3ZjphMToyNTo2MjpmZTowYzo5MDo3YzoyMjoxMzo3YjowOTo5ZTplMDo1NjoxMTpkYjoKICAgICAgICAgY2I6MWY6Yjk6NTA6Yzg6ODM6N2U6NmE6M2Y6ZDA6ODA6MjY6MTM6MTA6MTE6YmQ6Nzk6M2Q6CiAgICAgICAgIDhmOjg5OmRjOjVhOjExOmE3OjlhOjc3OmJkOjAwOjZjOmI3OjRlOmIyOjI1OmU3OjQxOmVjOgogICAgICAgICAwNzo2NDphZjo0OToyYjpiMjoyYjo4MzowMDo2Mzo3ZDplNDo3ZTo3ZTo5YToxMjplODo1MDoKICAgICAgICAgY2I6OTk6MzE6MzA6Nzg6MjQ6NmQ6NjY6ZGM6YmE6Yzg6NWM6M2Y6NWU6MmU6ZWQ6Mzg6YmU6CiAgICAgICAgIDg2OmRkOmIxOjQxOjgwOjUxOjZkOjYwOmRhOjM4OmM1OjhjOjA5OmY3OmY2OjNlOmM4OjRiOgogICAgICAgICBhMDoxYjo0MzpmYzowYzo2MzplMDo3OTo4YzoyYjoyNTo4MDoxZTo3Mjo3MjoxYjo3Mzo2MjoKICAgICAgICAgNWQ6NjA6ZjA6Mzk6OWI6NDY6YjA6MDQ6OGE6ZmY6ODM6YjU6ZTQ6NmI6NDc6NGY6YTc6YjE6CiAgICAgICAgIDUyOmQ0OmUwOmQzOjNhOmMwOjdjOjI0OmUxOjI3OjJjOjU0OmVmOjEyOjllOmJkOmI3OjllOgogICAgICAgICAyMjoxODo0NjphYjo1MDoxYzpkYzo5ODo3YTo1ZDo2NTphYzpmZTpiYjplZDpiOTo1NTo5ZDoKICAgICAgICAgN2I6ZWU6MTI6NzA6ZmI6NWI6OWQ6ZmY6ZDY6NjM6N2U6NWE6NjQ6YTY6MWI6M2M6YWI6M2E6CiAgICAgICAgIGM1OjUyOjZmOjkyOjBiOmNmOjc4OjcwOmM1OjZiOjE4OjkxOjFjOjY4OmQ0OmE5OjhkOjZjOgogICAgICAgICA3Njo3NjpjYTo2YTozMTplMDo5Njo4ZjozZDpiYjo0ZDozMzo2ZTpmNTpmYTpjMzo2ODoxODoKICAgICAgICAgZGY6Nzc6YzU6ZGU6NTc6ZTE6NzE6ZDE6MGU6YTU6MDI6NjU6MDk6ZDA6Nzk6OGI6MDQ6YjU6CiAgICAgICAgIGRjOmJiOmZhOmNhOjZjOmQ2OjMyOmQwOmViOmNjOjkwOmJkOmYzOjZjOmRmOjQ4OjFkOjdmOgogICAgICAgICAyMTozODpmYjphNDpjZDo1NTpmMDo3NzplMDozYTozZTpmMDpmNzo3Mjo3MzowOTpjODo5NzoKICAgICAgICAgNWM6NjA6NzM6NzA6ODE6ZTg6YWY6YmY6Y2Q6MzM6Yjk6MGI6ZGM6Y2I6MDM6N2U6ZjM6MjI6CiAgICAgICAgIDE0OjIxOjM1OjUxOjFkOmRlOjg5OmM0OmRiOjMxOmY4OmVhOjA5OmI3Ojc2OjkyOjI2OjZkOgogICAgICAgICAyNjoyYTo0NTo2YTo0NzoyOTo1MTo1Njo0MzozMjo1NDplYTo3NzpjNzo3Yzo0YTo5NDpmMToKICAgICAgICAgNDA6MmQ6MjE6MGE6MTk6ZDM6ZWY6OGMKLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVvRENDQW9pZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFoTVI4d0hRWURWUVFEREJabGRHTmsKTFhOcFoyNWxja0F4TlRFM016azVOVEEzTUI0WERURTRNREV6TVRFeE5UTXlOMW9YRFRJek1ERXpNREV4TlRNeQpOMW93S3pFcE1DY0dBMVVFQXd3Z2MzQXRiM010YldGemRHVnlNREV1YjNNdVlXUXVjMk5oYm5Cc2RYTXVaR1V3CmdnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURrNHlzTHd6dVVKT1I0aWlKdnViTnoKWGJNdDJVQUg3ZWRyWjV2MWVvWng5amQ3TldRNjFVMFlBOVdGYklYazdlbTJqTFpIcTkxRGYxSlFHbmRtTGxuUgp5TzNKZ1puVElvNzRpWUtpRUVFalIxZWhuVTBod3MxWUpUbUVjOFpZazJOWEJmNEo5c1UzSE1zZ2ZoTDF0VDlnCk1ZN0F3dXg0YUk0aTJLZU1LUDJDYUI2ZWtQNUg1UUFJVXlGeXJacXpsdG9MK0pLZElibWNhTnJHVndqNGh6di8KRkE3MjBLN1FnSzFaajBYcjQvcTB1TndVMUwwRFBmV09kV0QxYm9iOUlHZ1p6RFRkck5yUEdwb3ZLRmM2azg0bQp1NlI5cTN3NDduUHg1bjhKOW5UMzdBdWpQUi8wc1F5OU1ra3ZTWTl5N3dIaUUxWGhLbWdkY0tLbHVqM0JOZWdUCkFnTUJBQUdqZ2Rnd2dkVXdVUVlEVlIwakJFb3dTSUFVMkxtUGhCZm5Td3VCVk9VZ2d6a2ZRcFp2RDZlaEphUWoKTUNFeEh6QWRCZ05WQkFNTUZtVjBZMlF0YzJsbmJtVnlRREUxTVRjek9UazFNRGVDQ1FEaWlqTVVtNCt1OWpBTQpCZ05WSFJNQkFmOEVBakFBTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQk1Bc0dBMVVkRHdRRUF3SUZvREFkCkJnTlZIUTRFRmdRVVp5SUhhdTZkanIxNXVXV3p3RGEzdHlLam1FSXdNUVlEVlIwUkJDb3dLSWNFckI1UThJSWcKYzNBdGIzTXRiV0Z6ZEdWeU1ERXViM011WVdRdWMyTmhibkJzZFhNdVpHVXdEUVlKS29aSWh2Y05BUUVMQlFBRApnZ0lCQUFvSHdVU0Z3NGtFWWRnQWFyR3FsOU1MaU9YdUt0UVp1NERFem1adXlnQmJ5VFhpRzVTK1p6QjRSV3E3ClFSM0xDWnc1SlQ4cHdhT0x4TEU3dG5UN09Zc29DMUdkUnB6Z0Z3TlhpM
3huazd0Z2RmelJ0YjFBOWQ2T3IwS2wKM1Q5b1grNUV4VitFdUUwWm5yRVN1cWV3c3FONkxjNFR4eE5QMWlzQnFiWVgwZFdUSkVPWmZrbUJLaEVROW1lZApNelRONjJOc2pqWEYwWkpSdjd0dTArc01qQjQvSXFOL29TVmkvZ3lRZkNJVGV3bWU0RllSMjhzZnVWRElnMzVxClA5Q0FKaE1RRWIxNVBZK0ozRm9ScDVwM3ZRQnN0MDZ5SmVkQjdBZGtyMGtyc2l1REFHTjk1SDUrbWhMb1VNdVoKTVRCNEpHMW0zTHJJWEQ5ZUx1MDR2b2Jkc1VHQVVXMWcyampGakFuMzlqN0lTNkFiUS93TVkrQjVqQ3NsZ0I1eQpjaHR6WWwxZzhEbWJSckFFaXYrRHRlUnJSMCtuc1ZMVTROTTZ3SHdrNFNjc1ZPOFNucjIzbmlJWVJxdFFITnlZCmVsMWxyUDY3N2JsVm5YdnVFbkQ3VzUzLzFtTitXbVNtR3p5ck9zVlNiNUlMejNod3hXc1lrUnhvMUttTmJIWjIKeW1veDRKYVBQYnROTTI3MStzTm9HTjkzeGQ1WDRYSFJEcVVDWlFuUWVZc0V0ZHk3K3NwczFqTFE2OHlRdmZOcwozMGdkZnlFNCs2VE5WZkIzNERvKzhQZHljd25JbDF4Z2MzQ0I2SysvelRPNUM5ekxBMzd6SWhRaE5WRWQzb25FCjJ6SDQ2Z20zZHBJbWJTWXFSV3BIS1ZGV1F6SlU2bmZIZkVxVThVQXRJUW9aMCsrTQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==", "source": "/etc/etcd/server.crt", "encoding": "base64", "invocation": {"module_args": {"src": "/etc/etcd/server.crt"}}}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "content": "[duplicate of the base64 payload in the raw module output above; elided]", "encoding": "base64", "invocation": { "module_args": { "src": "/etc/etcd/server.crt" } }, "source": "/etc/etcd/server.crt" }
TASK [set_fact] *************************************************************************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_main.yml:16 Wednesday 09 January 2019 15:58:23 +0100 (0:00:00.322) 0:18:58.060 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "__etcd_cert_lacks_hostname": false }, "changed": false }
META: ran handlers
META: ran handlers
PLAY [Check cert expirys] ***************************************************************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [openshift_certificate_expiry : Ensure python dateutil library is present] *********************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:2 Wednesday 09 January 2019 15:58:23 +0100 (0:00:00.152) 0:18:58.213 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason":
"Conditional result was False" } TASK [openshift_certificate_expiry : Check cert expirys on host] ************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:8 Wednesday 09 January 2019 15:58:24 +0100 (0:00:00.196) 0:18:58.409 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_certificate_expiry : Generate expiration report HTML] ******************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:15 Wednesday 09 January 2019 15:58:24 +0100 (0:00:00.196) 0:18:58.605 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_certificate_expiry : Generate results JSON file] ************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:26 Wednesday 09 January 2019 15:58:24 +0100 (0:00:00.232) 0:18:58.838 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:39 Wednesday 09 January 2019 15:58:24 +0100 (0:00:00.212) 0:18:59.050 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Backup and remove generated etcd certificates] ************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup_generated_certificates.yml:2 Wednesday 09 January 2019 15:58:25 +0100 (0:00:00.208) 0:18:59.258 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : include_tasks] 
************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/remove_generated_certificates.yml:2 Wednesday 09 January 2019 15:58:25 +0100 (0:00:00.199) 0:18:59.458 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Backup deployed etcd certificates] ************************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup_server_certificates.yml:2 Wednesday 09 January 2019 15:58:25 +0100 (0:00:00.514) 0:18:59.972 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Create etcd server certificates for etcd hosts] *********************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/server_certificates.yml:2 Wednesday 09 January 2019 15:58:25 +0100 (0:00:00.229) 0:19:00.202 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/server_certificates.yml:6 Wednesday 09 January 2019 15:58:26 +0100 (0:00:00.204) 0:19:00.407 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Create etcd client certificates for master hosts] ********************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [etcd : include_tasks] 
************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/client_certificates.yml:2 Wednesday 09 January 2019 15:58:26 +0100 (0:00:00.226) 0:19:00.633 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Restart etcd] ********************************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [etcd : restart etcd] ************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/restart.yml:2 Wednesday 09 January 2019 15:58:26 +0100 (0:00:00.201) 0:19:00.835 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Restart etcd] ********************************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [etcd : restart etcd] ************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/restart.yml:2 Wednesday 09 January 2019 15:58:26 +0100 (0:00:00.220) 0:19:01.055 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Validate configuration for rolling restart] *************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:7 Wednesday 09 January 2019 15:58:27 +0100 (0:00:00.219) 0:19:01.274 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Create temp file on localhost] 
**************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [command] ************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:19 Wednesday 09 January 2019 15:58:27 +0100 (0:00:00.135) 0:19:01.409 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Check if temp file exists on any masters] ***************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [stat] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:26 Wednesday 09 January 2019 15:58:27 +0100 (0:00:00.209) 0:19:01.619 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Cleanup temp file on localhost] *************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [file] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:40 Wednesday 09 January 2019 15:58:27 +0100 (0:00:00.126) 0:19:01.745 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Warn if restarting the system where ansible is running] *************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [pause] **************************************************************************************************************************************************************************************************************************************************************************************************************** task path: 
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:47 Wednesday 09 January 2019 15:58:27 +0100 (0:00:00.220) 0:19:01.965 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/validate_restart.yml:59 Wednesday 09 January 2019 15:58:27 +0100 (0:00:00.199) 0:19:02.165 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Restart masters] ****************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers META: ran handlers TASK [Restart master system] ************************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml:2 Wednesday 09 January 2019 15:58:28 +0100 (0:00:00.210) 0:19:02.376 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Wait for master to restart] ******************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml:10 Wednesday 09 January 2019 15:58:28 +0100 (0:00:00.207) 0:19:02.584 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Wait for master API to come back online] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml:17 Wednesday 09 January 2019 15:58:28 +0100 (0:00:00.206) 0:19:02.790 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : restart master] ***************************************************************************************************************************************************************************************************************************************************************************** task path: 
/usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/restart.yml:2 Wednesday 09 January 2019 15:58:28 +0100 (0:00:00.222) 0:19:03.012 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => (item=api) => { "changed": false, "item": "api", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=controllers) => { "changed": false, "item": "controllers", "skip_reason": "Conditional result was False" } META: ran handlers PLAY [Backup etcd] ********************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup.yml:2 Wednesday 09 January 2019 15:58:29 +0100 (0:00:00.303) 0:19:03.316 ***** included: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml for sp-os-master01.os.ad.scanplus.de TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:2 Wednesday 09 January 2019 15:58:29 +0100 (0:00:00.201) 0:19:03.517 ***** included: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml for sp-os-master01.os.ad.scanplus.de TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:5 Wednesday 09 January 2019 15:58:29 +0100 (0:00:00.187) 0:19:03.705 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_backup_dir_name": "openshift-backup-pre-upgrade-20190109155829" }, "changed": false } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:8 Wednesday 09 January 2019 15:58:29 +0100 (0:00:00.396) 0:19:04.102 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_etcd_incontainer_data_dir": "/var/lib/etcd/" }, "changed": false } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** 
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:11
Wednesday 09 January 2019 15:58:30 +0100 (0:00:00.148) 0:19:04.251 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_etcd_incontainer_backup_dir": "/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829" }, "changed": false }
TASK [etcd : set_fact] ************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:14
Wednesday 09 January 2019 15:58:30 +0100 (0:00:00.138) 0:19:04.390 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_etcd_backup_dir": "/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829" }, "changed": false }
TASK [etcd : Check available disk space for etcd backup] *************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:5
Wednesday 09 January 2019 15:58:30 +0100 (0:00:00.145) 0:19:04.536 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"changed": true, "end": "2019-01-09 15:58:30.507826", "stdout": "31619720", "cmd": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "rc": 0, "start": "2019-01-09 15:58:30.502797", "stderr": "", "delta": "0:00:00.005029", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "delta": "0:00:00.005029", "end": "2019-01-09 15:58:30.507826", "invocation": { "module_args": { "_raw_params": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:30.502797", "stderr": "", "stderr_lines": [], "stdout": "31619720", "stdout_lines": [ "31619720" ] }
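Before backing up, the role records free space on the volume holding the etcd data directory (the df task above) and, in the next task, measures current usage; the later "Abort if insufficient disk space" task fires only when the backup would not fit. In this run about 30 GiB (31619720 KiB) was available against roughly 630 MiB (644048 KiB) in use, so the abort was skipped. The two probes can be reproduced by hand with the same shell commands the tasks executed (paths taken from this run):

    # Available KiB on the volume holding the etcd data dir
    df --output=avail -k /var/lib/etcd/ | tail -n 1
    # Current etcd usage in KiB, ignoring earlier openshift-backup directories
    du --exclude='*openshift-backup*' -k /var/lib/etcd/ | tail -n 1 | cut -f1

TASK [etcd : Check current etcd disk usage] **************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:13
Wednesday 09 January 2019 15:58:30 +0100 (0:00:00.326) 0:19:04.862 *****
Using module file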
/usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:30.823503", "stdout": "644048", "cmd": "du --exclude=\'*openshift-backup*\' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "rc": 0, "start": "2019-01-09 15:58:30.816150", "stderr": "", "delta": "0:00:00.007353", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": "du --exclude=\'*openshift-backup*\' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": "du --exclude='*openshift-backup*' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "delta": "0:00:00.007353", "end": "2019-01-09 15:58:30.823503", "invocation": { "module_args": { "_raw_params": "du --exclude='*openshift-backup*' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:30.816150", "stderr": "", "stderr_lines": [], "stdout": "644048", "stdout_lines": [ "644048" ] } TASK [etcd : Abort if insufficient disk space for etcd backup] ************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:20 Wednesday 09 January 2019 15:58:30 +0100 (0:00:00.309) 0:19:05.172 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Check selinux label of '/var/lib/etcd/'] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:32 Wednesday 09 January 2019 15:58:31 +0100 (0:00:00.117) 0:19:05.289 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:31.281240", "stdout": "system_u:object_r:container_file_t:s0", "cmd": ["stat", "-c", "%C", "/var/lib/etcd/"], "rc": 0, "start": "2019-01-09 15:58:31.274849", "stderr": "", "delta": "0:00:00.006391", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": 
"stat -c \'%C\' /var/lib/etcd/", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "stat", "-c", "%C", "/var/lib/etcd/" ], "delta": "0:00:00.006391", "end": "2019-01-09 15:58:31.281240", "invocation": { "module_args": { "_raw_params": "stat -c '%C' /var/lib/etcd/", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:31.274849", "stderr": "", "stderr_lines": [], "stdout": "system_u:object_r:container_file_t:s0", "stdout_lines": [ "system_u:object_r:container_file_t:s0" ] } TASK [etcd : debug] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:37 Wednesday 09 January 2019 15:58:31 +0100 (0:00:00.344) 0:19:05.634 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": { "changed": true, "cmd": [ "stat", "-c", "%C", "/var/lib/etcd/" ], "delta": "0:00:00.006391", "end": "2019-01-09 15:58:31.281240", "failed": false, "rc": 0, "start": "2019-01-09 15:58:31.274849", "stderr": "", "stderr_lines": [], "stdout": "system_u:object_r:container_file_t:s0", "stdout_lines": [ "system_u:object_r:container_file_t:s0" ] } } TASK [etcd : Make sure the '/var/lib/etcd/' has the proper label] *********************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:40 Wednesday 09 January 2019 15:58:31 +0100 (0:00:00.151) 0:19:05.786 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:31.774208", "stdout": "", "cmd": ["chcon", "-t", "svirt_sandbox_file_t", "/var/lib/etcd/"], "rc": 0, "start": "2019-01-09 15:58:31.770566", "stderr": "", "delta": "0:00:00.003642", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "chcon -t svirt_sandbox_file_t \\"/var/lib/etcd/\\"", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "chcon", "-t", "svirt_sandbox_file_t", "/var/lib/etcd/" ], "delta": "0:00:00.003642", "end": "2019-01-09 15:58:31.774208", "invocation": { "module_args": { "_raw_params": "chcon -t svirt_sandbox_file_t \"/var/lib/etcd/\"", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:31.770566", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [etcd : Generate etcd backup] ****************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:47 Wednesday 09 January 2019 15:58:31 +0100 (0:00:00.328) 0:19:06.114 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:34.276292", "stdout": "", "cmd": ["/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", "backup", "--data-dir=/var/lib/etcd/", "--backup-dir=/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829"], "rc": 0, "start": "2019-01-09 15:58:32.093066", "stderr": "2019-01-09 14:58:34.225810 I | wal: segmented wal file /var/lib/etcd/openshift-backup-pre-upgrade-20190109155829/member/wal/0000000000000001-0000000006a382e5.wal is created", "delta": "0:00:02.183226", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl backup --data-dir=/var/lib/etcd/ --backup-dir=/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", "backup", "--data-dir=/var/lib/etcd/", "--backup-dir=/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829" ], "delta": "0:00:02.183226", "end": "2019-01-09 15:58:34.276292", "invocation": { "module_args": { "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl backup --data-dir=/var/lib/etcd/ --backup-dir=/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:32.093066", "stderr": "2019-01-09 14:58:34.225810 I | wal: segmented wal file /var/lib/etcd/openshift-backup-pre-upgrade-20190109155829/member/wal/0000000000000001-0000000006a382e5.wal is created", "stderr_lines": [ "2019-01-09 14:58:34.225810 I | wal: segmented wal file /var/lib/etcd/openshift-backup-pre-upgrade-20190109155829/member/wal/0000000000000001-0000000006a382e5.wal is created" ], "stdout": "", "stdout_lines": [] } TASK [etcd : Check for v3 data store] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:54 Wednesday 09 January 2019 15:58:34 +0100 (0:00:02.514) 0:19:08.628 ***** Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/var/lib/etcd//member/snap/db", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 996, "exists": true, "woth": false, "device_type": 0, "mtime": 1547045914.591503, "block_size": 4096, "inode": 519580, "isgid": false, "size": 213491712, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "etcd", "gid": 993, "ischr": false, "wusr": true, "writeable": true, "blocks": 412824, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": false, "gr_name": "etcd", "path": "/var/lib/etcd//member/snap/db", "xusr": false, "atime": 1547042370.2364523, "isdir": false, "ctime": 1547045914.591503, "isblk": false, "xgrp": false, "dev": 64771, "roth": false, "isfifo": false, "mode": "0600", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/var/lib/etcd//member/snap/db" } }, "stat": { "atime": 1547042370.2364523, "block_size": 4096, "blocks": 412824, "ctime": 1547045914.591503, "dev": 64771, "device_type": 0, "executable": false, "exists": true, "gid": 993, "gr_name": "etcd", "inode": 519580, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1547045914.591503, "nlink": 1, "path": "/var/lib/etcd//member/snap/db", "pw_name": "etcd", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 213491712, "uid": 996, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [etcd : Copy etcd v3 data store] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:62 Wednesday 09 January 2019 15:58:34 +0100 (0:00:00.318) 0:19:08.947 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:35.058331", "stdout": "", "cmd": ["cp", "-a", "/var/lib/etcd//member/snap/db", 
"/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829/member/snap/"], "rc": 0, "start": "2019-01-09 15:58:34.879056", "stderr": "", "delta": "0:00:00.179275", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "cp -a /var/lib/etcd//member/snap/db /var/lib/etcd//openshift-backup-pre-upgrade-20190109155829/member/snap/", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "cp", "-a", "/var/lib/etcd//member/snap/db", "/var/lib/etcd//openshift-backup-pre-upgrade-20190109155829/member/snap/" ], "delta": "0:00:00.179275", "end": "2019-01-09 15:58:35.058331", "invocation": { "module_args": { "_raw_params": "cp -a /var/lib/etcd//member/snap/db /var/lib/etcd//openshift-backup-pre-upgrade-20190109155829/member/snap/", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:34.879056", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:68 Wednesday 09 January 2019 15:58:35 +0100 (0:00:00.438) 0:19:09.385 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "r_etcd_common_backup_complete": true }, "changed": false } TASK [etcd : Display location of etcd backup] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:71 Wednesday 09 January 2019 15:58:35 +0100 (0:00:00.126) 0:19:09.512 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "Etcd backup created in /var/lib/etcd//openshift-backup-pre-upgrade-20190109155829" } META: ran handlers META: ran handlers PLAY [Gate on etcd backup] ************************************************************************************************************************************************************************************************************************************************************************************************** TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_backup.yml:13 Wednesday 09 January 2019 15:58:35 +0100 (0:00:00.069) 0:19:09.581 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root <127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0' ok: [localhost] META: ran handlers TASK [set_fact] 
************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_backup.yml:17 Wednesday 09 January 2019 15:58:36 +0100 (0:00:00.919) 0:19:10.501 ***** ok: [localhost] => { "ansible_facts": { "etcd_backup_completed": [ "sp-os-master01.os.ad.scanplus.de" ] }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_backup.yml:21 Wednesday 09 January 2019 15:58:36 +0100 (0:00:00.480) 0:19:10.982 ***** ok: [localhost] => { "ansible_facts": { "etcd_backup_failed": [] }, "changed": false } TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_backup.yml:23 Wednesday 09 January 2019 15:58:36 +0100 (0:00:00.107) 0:19:11.090 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Drop etcdctl profiles] ************************************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [etcd : Configure etcd profile.d aliases] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/drop_etcdctl.yml:2 Wednesday 09 January 2019 15:58:36 +0100 (0:00:00.123) 0:19:11.213 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045917.03-195514750757817 `" && echo 
ansible-tmp-1547045917.03-195514750757817="` echo /root/.ansible/tmp/ansible-tmp-1547045917.03-195514750757817 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045917.03-195514750757817=/root/.ansible/tmp/ansible-tmp-1547045917.03-195514750757817\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/etc/profile.d/etcdctl.sh", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": true, "device_type": 0, "mtime": 1547019657.9985855, "block_size": 4096, "inode": 1180426, "isgid": false, "size": 833, "executable": true, "isuid": false, "readable": true, "version": "18446744073061000206", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/x-shellscript", "blocks": 8, "xoth": true, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/profile.d/etcdctl.sh", "xusr": true, "atime": 1547019658.2075894, "isdir": false, "ctime": 1547019658.1705887, "isblk": false, "wgrp": false, "checksum": "67725f6a8671eecd798de52ad1df45a4b61883c7", "dev": 64769, "roth": true, "isfifo": false, "mode": "0755", "xgrp": true, "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:bin_t:s0", "mode": "0755", "path": "/etc/profile.d/etcdctl.sh", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": "etcdctl.sh.j2", "path": "/etc/profile.d/etcdctl.sh", "owner": "root", "follow": false, "group": "root", "unsafe_writes": null, "serole": null, "content": null, "state": "file", "setype": null, "dest": "/etc/profile.d/etcdctl.sh", "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 493, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/profile.d/etcdctl.sh"}, "before": {"path": "/etc/profile.d/etcdctl.sh"}}, "size": 833}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r 
sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547045917.03-195514750757817/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "checksum": "67725f6a8671eecd798de52ad1df45a4b61883c7", "dest": "/etc/profile.d/etcdctl.sh", "diff": { "after": { "path": "/etc/profile.d/etcdctl.sh" }, "before": { "path": "/etc/profile.d/etcdctl.sh" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": "etcdctl.sh.j2", "attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/etc/profile.d/etcdctl.sh", "directory_mode": null, "follow": false, "force": false, "group": "root", "mode": 493, "owner": "root", "path": "/etc/profile.d/etcdctl.sh", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "file", "unsafe_writes": null } }, "mode": "0755", "owner": "root", "path": "/etc/profile.d/etcdctl.sh", "secontext": "system_u:object_r:bin_t:s0", "size": 833, "state": "file", "uid": 0 } META: ran handlers META: ran handlers PLAY [Determine etcd version] *********************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [etcd : Record RPM based etcd version] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/version_detect.yml:4 Wednesday 09 January 2019 15:58:37 +0100 (0:00:00.521) 0:19:11.735 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : debug] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/version_detect.yml:13 Wednesday 09 January 2019 15:58:37 +0100 (0:00:00.111) 0:19:11.846 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} META: ran handlers META: ran handlers PLAY [Upgrade to 3.2] ******************************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [etcd : Verify cluster is healthy] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml:2 Wednesday 09 January 2019 15:58:37 +0100 (0:00:00.122) 0:19:11.968 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": 
false, "skip_reason": "Conditional result was False" } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_rpm.yml:13 Wednesday 09 January 2019 15:58:37 +0100 (0:00:00.119) 0:19:12.087 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Update etcd RPM to {{ l_etcd_target_package }}] **************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_rpm.yml:16 Wednesday 09 January 2019 15:58:37 +0100 (0:00:00.116) 0:19:12.204 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Ensure ETCD_CA_FILE is absent] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml:5 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.113) 0:19:12.318 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Ensure ETCD_PEER_CA_FILE is absent] **************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml:11 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.124) 0:19:12.442 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Ensure ETCD_QUOTA_BACKEND_BYTES exists] ************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml:17 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.114) 0:19:12.557 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Ensure ETCD_CLIENT_CERT_AUTH exists] *************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml:23 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.111) 0:19:12.668 ***** skipping: 
[sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Ensure ETCD_PEER_CLIENT_CERT_AUTH exists] ********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml:29 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.115) 0:19:12.784 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Ensure ETCD_TRUSTED_CA_FILE exists] **************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml:35 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.121) 0:19:12.905 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Ensure ETCD_PEER_TRUSTED_CA_FILE exists] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/validate_etcd_conf.yml:41 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.106) 0:19:13.012 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : restart etcd] ************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_rpm.yml:25 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.108) 0:19:13.121 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Verify cluster is healthy] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml:2 Wednesday 09 January 2019 15:58:38 +0100 (0:00:00.107) 0:19:13.228 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Upgrade etcd static pods] ********************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [etcd : Verify cluster is healthy] 
*********************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml:2
Wednesday 09 January 2019 15:58:39 +0100 (0:00:00.144) 0:19:13.372 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"changed": true, "end": "2019-01-09 15:58:39.461795", "stdout": "member 2cffc34cde3715e2 is healthy: got healthy result from https://172.30.80.240:2379\\ncluster is healthy", "cmd": ["/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", "--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://sp-os-master01.os.ad.scanplus.de:2379", "cluster-health"], "rc": 0, "start": "2019-01-09 15:58:39.320789", "stderr": "", "delta": "0:00:00.141006", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt --endpoints https://sp-os-master01.os.ad.scanplus.de:2379 cluster-health", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '')
changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", "--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://sp-os-master01.os.ad.scanplus.de:2379", "cluster-health" ], "delta": "0:00:00.141006", "end": "2019-01-09 15:58:39.461795", "invocation": { "module_args": { "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt --endpoints https://sp-os-master01.os.ad.scanplus.de:2379 cluster-health", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:39.320789", "stderr": "", "stderr_lines": [], "stdout": "member 2cffc34cde3715e2 is healthy: got healthy result from https://172.30.80.240:2379\ncluster is healthy", "stdout_lines": [ "member 2cffc34cde3715e2 is healthy: got healthy result from https://172.30.80.240:2379", "cluster is healthy" ] }
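The same health probe can be run by hand on a master at any point during the upgrade; it queries the local member over the peer certificates (command exactly as captured above, endpoint specific to this host):

    /usr/local/bin/master-exec etcd etcd etcdctl \
        --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key \
        --ca-file /etc/etcd/ca.crt \
        --endpoints https://sp-os-master01.os.ad.scanplus.de:2379 cluster-health

TASK [etcd : Check for old etcd service files] ************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_static.yml:8
Wednesday 09 January 2019 15:58:39 +0100 (0:00:00.439) 0:19:13.811 *****
Using module file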
TASK [etcd : Check for old etcd service files] ********************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_static.yml:8
Wednesday 09 January 2019 15:58:39 +0100 (0:00:00.439) 0:19:13.811 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/etc/systemd/system/etcd.service", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "lnk_target": "/dev/null", "woth": true, "device_type": 0, "mtime": 1536865741.2514122, "block_size": 4096, "inode": 531778, "isgid": false, "size": 9, "wgrp": true, "executable": false, "isuid": false, "readable": true, "isreg": false, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 0, "xoth": true, "islnk": true, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/systemd/system/etcd.service", "xusr": true, "atime": 1547045415.3829203, "lnk_source": "/dev/null", "isdir": false, "ctime": 1536865741.2514122, "isblk": false, "xgrp": true, "dev": 64769, "roth": true, "isfifo": false, "mode": "0777", "rusr": true}, "changed": false}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => (item=/etc/systemd/system/etcd.service) => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/systemd/system/etcd.service" } }, "item": "/etc/systemd/system/etcd.service", "stat": { "atime": 1547045415.3829203, "block_size": 4096, "blocks": 0, "ctime": 1536865741.2514122, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 531778, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/dev/null", "lnk_target": "/dev/null", "mode": "0777", "mtime": 1536865741.2514122, "nlink": 1, "path": "/etc/systemd/system/etcd.service", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 9, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
"gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 0, "xoth": true, "islnk": true, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/systemd/system/etcd_container.service", "xusr": true, "atime": 1547019661.4416518, "lnk_source": "/dev/null", "isdir": false, "ctime": 1536865742.1694283, "isblk": false, "xgrp": true, "dev": 64769, "roth": true, "isfifo": false, "mode": "0777", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=/etc/systemd/system/etcd_container.service) => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/systemd/system/etcd_container.service" } }, "item": "/etc/systemd/system/etcd_container.service", "stat": { "atime": 1547019661.4416518, "block_size": 4096, "blocks": 0, "ctime": 1536865742.1694283, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 531779, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/dev/null", "lnk_target": "/dev/null", "mode": "0777", "mtime": 1536865742.1694283, "nlink": 1, "path": "/etc/systemd/system/etcd_container.service", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 9, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [etcd : Remove old etcd service files] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_static.yml:19 Wednesday 09 January 2019 15:58:40 +0100 (0:00:00.602) 0:19:14.414 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => (item={'_ansible_parsed': True, u'stat': {u'uid': 0, u'exists': True, u'lnk_target': u'/dev/null', u'woth': True, u'device_type': 0, u'mtime': 1536865741.2514122, u'block_size': 4096, u'inode': 531778, u'isgid': False, u'size': 9, u'wgrp': True, u'executable': False, u'isuid': False, u'readable': True, u'isreg': False, u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'blocks': 0, u'xoth': True, u'islnk': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/systemd/system/etcd.service', u'xusr': True, u'atime': 1547045415.3829203, u'lnk_source': u'/dev/null', u'isdir': False, u'ctime': 1536865741.2514122, u'isblk': False, u'xgrp': True, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0777', u'rusr': True}, u'changed': False, '_ansible_no_log': False, 'failed': False, '_ansible_item_result': True, 'item': u'/etc/systemd/system/etcd.service', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': False, u'follow': False, u'path': u'/etc/systemd/system/etcd.service', u'get_md5': None, u'get_mime': False, u'get_attributes': False}}, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/systemd/system/etcd.service'}) => { "changed": false, "item": { "changed": false, "failed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": 
false, "get_md5": null, "get_mime": false, "path": "/etc/systemd/system/etcd.service" } }, "item": "/etc/systemd/system/etcd.service", "stat": { "atime": 1547045415.3829203, "block_size": 4096, "blocks": 0, "ctime": 1536865741.2514122, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 531778, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/dev/null", "lnk_target": "/dev/null", "mode": "0777", "mtime": 1536865741.2514122, "nlink": 1, "path": "/etc/systemd/system/etcd.service", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 9, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }, "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item={'_ansible_parsed': True, u'stat': {u'uid': 0, u'exists': True, u'lnk_target': u'/dev/null', u'woth': True, u'device_type': 0, u'mtime': 1536865742.1694283, u'block_size': 4096, u'inode': 531779, u'isgid': False, u'size': 9, u'wgrp': True, u'executable': False, u'isuid': False, u'readable': True, u'isreg': False, u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'blocks': 0, u'xoth': True, u'islnk': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/systemd/system/etcd_container.service', u'xusr': True, u'atime': 1547019661.4416518, u'lnk_source': u'/dev/null', u'isdir': False, u'ctime': 1536865742.1694283, u'isblk': False, u'xgrp': True, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0777', u'rusr': True}, u'changed': False, '_ansible_no_log': False, 'failed': False, '_ansible_item_result': True, 'item': u'/etc/systemd/system/etcd_container.service', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': False, u'follow': False, u'path': u'/etc/systemd/system/etcd_container.service', u'get_md5': None, u'get_mime': False, u'get_attributes': False}}, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/systemd/system/etcd_container.service'}) => { "changed": false, "item": { "changed": false, "failed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/systemd/system/etcd_container.service" } }, "item": "/etc/systemd/system/etcd_container.service", "stat": { "atime": 1547019661.4416518, "block_size": 4096, "blocks": 0, "ctime": 1536865742.1694283, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 531779, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/dev/null", "lnk_target": "/dev/null", "mode": "0777", "mtime": 1536865742.1694283, "nlink": 1, "path": "/etc/systemd/system/etcd_container.service", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 9, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }, "skip_reason": "Conditional result was False" } TASK [etcd : Stop, disable and mask old etcd service] 
*********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_static.yml:31 Wednesday 09 January 2019 15:58:40 +0100 (0:00:00.150) 0:19:14.565 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"TimeoutStopUSec": "1min 30s", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ExecMainCode": "0", "UnitFileState": "bad", "ExecMainPID": "0", "LimitSIGPENDING": "63379", "FileDescriptorStoreMax": "0", "LoadState": "masked", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "0", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "0", "AllowIsolate": "no", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "PrivateTmp": "no", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "0", "SendSIGHUP": "no", "ExecMainStartTimestampMonotonic": "0", "SyslogPriority": "30", "SameProcessGroup": "no", "LimitNPROC": "63379", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "0", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "TasksCurrent": "18446744073709551615", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "dead", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "MainPID": "0", "StartupBlockIOWeight": "18446744073709551615", "FragmentPath": "/dev/null", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": "0", "ActiveState": "inactive", "Nice": "0", "LimitDATA": "18446744073709551615", "MemoryCurrent": "18446744073709551615", "LimitRTTIME": "18446744073709551615", "SecureBits": "0", "RestartUSec": "100ms", "Transient": "no", "CPUAccounting": "yes", "RemainAfterExit": "no", "PrivateNetwork": "no", "Restart": "no", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "StartLimitBurst": "5", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "StandardInput": "null", "AssertTimestampMonotonic": "0", "DefaultDependencies": "yes", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", 
"ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "none", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "NoNewPrivileges": "no", "OnFailureJobMode": "replace", "AssertResult": "no", "LimitLOCKS": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "RefuseManualStop": "no", "LimitNICE": "0", "FailureAction": "none", "CanIsolate": "no", "StandardOutput": "inherit", "MountFlags": "0", "InactiveEnterTimestampMonotonic": "0", "StandardError": "inherit", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "IOScheduling": "0", "Description": "etcd.service", "ActiveExitTimestampMonotonic": "0", "CanReload": "no", "PrivateDevices": "no", "BlockIOWeight": "18446744073709551615", "Names": "etcd.service", "ProtectSystem": "no", "ControlPID": "0", "Id": "etcd.service"}, "name": "etcd", "changed": false, "enabled": false, "state": "stopped", "invocation": {"module_args": {"no_block": false, "force": null, "name": "etcd", "enabled": false, "daemon_reload": true, "state": "stopped", "user": false, "masked": true}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=etcd) => { "changed": false, "enabled": false, "failed_when_result": false, "invocation": { "module_args": { "daemon_reload": true, "enabled": false, "force": null, "masked": true, "name": "etcd", "no_block": false, "state": "stopped", "user": false } }, "item": "etcd", "name": "etcd", "state": "stopped", "status": { "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "etcd.service", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/dev/null", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "etcd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63379", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63379", "LimitSTACK": "18446744073709551615", "LoadState": "masked", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "etcd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", 
"OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "inherit", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "UMask": "0022", "UnitFileState": "bad", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"status": {"TimeoutStopUSec": "1min 30s", "RuntimeDirectoryMode": "0755", "GuessMainPID": "yes", "ExecMainCode": "0", "UnitFileState": "bad", "ExecMainPID": "0", "LimitSIGPENDING": "63379", "FileDescriptorStoreMax": "0", "LoadState": "masked", "ProtectHome": "no", "TTYVTDisallocate": "no", "StartLimitInterval": "10000000", "WatchdogTimestampMonotonic": "0", "LimitSTACK": "18446744073709551615", "ActiveEnterTimestampMonotonic": "0", "AllowIsolate": "no", "IgnoreOnSnapshot": "no", "StartLimitAction": "none", "CPUSchedulingPriority": "0", "KillSignal": "15", "LimitFSIZE": "18446744073709551615", "IgnoreOnIsolate": "no", "LimitCPU": "18446744073709551615", "MemoryLimit": "18446744073709551615", "CanStart": "yes", "JobTimeoutAction": "none", "PrivateTmp": "no", "LimitAS": "18446744073709551615", "RootDirectoryStartOnly": "no", "InactiveExitTimestampMonotonic": "0", "SendSIGHUP": "no", "ExecMainStartTimestampMonotonic": "0", "SyslogPriority": "30", "SameProcessGroup": "no", "LimitNPROC": "63379", "UMask": "0022", "NonBlocking": "no", "DevicePolicy": "auto", "CapabilityBoundingSet": "18446744073709551615", "TTYReset": "no", "OOMScoreAdjust": "0", "RefuseManualStart": "no", "KillMode": "control-group", "SyslogLevelPrefix": "yes", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "Delegate": "no", "TasksCurrent": "18446744073709551615", "LimitCORE": "18446744073709551615", "JobTimeoutUSec": "0", "TimerSlackNSec": "50000", "SubState": "dead", "CPUSchedulingResetOnFork": "no", "Result": "success", "CPUShares": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "MainPID": "0", "StartupBlockIOWeight": "18446744073709551615", "FragmentPath": "/dev/null", "StartupCPUShares": "18446744073709551615", "WatchdogUSec": 
"0", "ActiveState": "inactive", "Nice": "0", "LimitDATA": "18446744073709551615", "MemoryCurrent": "18446744073709551615", "LimitRTTIME": "18446744073709551615", "SecureBits": "0", "RestartUSec": "100ms", "Transient": "no", "CPUAccounting": "yes", "RemainAfterExit": "no", "PrivateNetwork": "no", "Restart": "no", "CPUSchedulingPolicy": "0", "LimitNOFILE": "65536", "SendSIGKILL": "yes", "StatusErrno": "0", "StartLimitBurst": "5", "SystemCallErrorNumber": "0", "TasksAccounting": "no", "NeedDaemonReload": "no", "TTYVHangup": "no", "StandardInput": "null", "AssertTimestampMonotonic": "0", "DefaultDependencies": "yes", "TasksMax": "18446744073709551615", "CPUQuotaPerSecUSec": "infinity", "ExecMainStatus": "0", "LimitMEMLOCK": "65536", "StopWhenUnneeded": "no", "LimitMSGQUEUE": "819200", "AmbientCapabilities": "0", "ExecMainExitTimestampMonotonic": "0", "NotifyAccess": "none", "PermissionsStartOnly": "no", "BlockIOAccounting": "yes", "CanStop": "yes", "NoNewPrivileges": "no", "OnFailureJobMode": "replace", "AssertResult": "no", "LimitLOCKS": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "RefuseManualStop": "no", "LimitNICE": "0", "FailureAction": "none", "CanIsolate": "no", "StandardOutput": "inherit", "MountFlags": "0", "InactiveEnterTimestampMonotonic": "0", "StandardError": "inherit", "MemoryAccounting": "yes", "IgnoreSIGPIPE": "yes", "IOScheduling": "0", "Description": "etcd_container.service", "ActiveExitTimestampMonotonic": "0", "CanReload": "no", "PrivateDevices": "no", "BlockIOWeight": "18446744073709551615", "Names": "etcd_container.service", "ProtectSystem": "no", "ControlPID": "0", "Id": "etcd_container.service"}, "name": "etcd_container", "changed": false, "enabled": false, "state": "stopped", "invocation": {"module_args": {"no_block": false, "force": null, "name": "etcd_container", "enabled": false, "daemon_reload": true, "state": "stopped", "user": false, "masked": true}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=etcd_container) => { "changed": false, "enabled": false, "failed_when_result": false, "invocation": { "module_args": { "daemon_reload": true, "enabled": false, "force": null, "masked": true, "name": "etcd_container", "no_block": false, "state": "stopped", "user": false } }, "item": "etcd_container", "name": "etcd_container", "state": "stopped", "status": { "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "BlockIOAccounting": "yes", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "yes", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "etcd_container.service", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/dev/null", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "etcd_container.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", 
"InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "65536", "LimitNPROC": "63379", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "63379", "LimitSTACK": "18446744073709551615", "LoadState": "masked", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "etcd_container.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "inherit", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "UMask": "0022", "UnitFileState": "bad", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [etcd : Remove nonexistent services] *********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_static.yml:43 Wednesday 09 January 2019 15:58:41 +0100 (0:00:00.906) 0:19:15.471 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:41.416770", "stdout": "", "cmd": ["systemctl", "reset-failed"], "rc": 0, "start": "2019-01-09 15:58:41.411866", "stderr": "", "delta": "0:00:00.004904", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "systemctl reset-failed", "removes": null, 
"argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "systemctl", "reset-failed" ], "delta": "0:00:00.004904", "end": "2019-01-09 15:58:41.416770", "invocation": { "module_args": { "_raw_params": "systemctl reset-failed", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:41.411866", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [etcd : set etcd host and ip facts] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/set_facts.yml:2 Wednesday 09 January 2019 15:58:41 +0100 (0:00:00.281) 0:19:15.753 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "etcd_hostname": "sp-os-master01.os.ad.scanplus.de", "etcd_ip": "172.30.80.240", "etcdctlv2": "/usr/local/bin/master-exec etcd etcd etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt --endpoints https://sp-os-master01.os.ad.scanplus.de:2379" }, "changed": false } TASK [etcd : Check that etcd image is present] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:5 Wednesday 09 January 2019 15:58:41 +0100 (0:00:00.152) 0:19:15.906 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:41.892819", "stdout": "635bb36d7fc7", "cmd": ["docker", "images", "-q", "registry.redhat.io/rhel7/etcd:3.2.22"], "rc": 0, "start": "2019-01-09 15:58:41.852418", "stderr": "", "delta": "0:00:00.040401", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "docker images -q registry.redhat.io/rhel7/etcd:3.2.22", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "docker", "images", "-q", "registry.redhat.io/rhel7/etcd:3.2.22" ], "delta": "0:00:00.040401", "end": "2019-01-09 15:58:41.892819", "invocation": { "module_args": { "_raw_params": "docker images -q registry.redhat.io/rhel7/etcd:3.2.22", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:41.852418", "stderr": "", "stderr_lines": [], "stdout": "635bb36d7fc7", "stdout_lines": [ "635bb36d7fc7" ] } TASK [etcd : Pre-pull etcd image] 
TASK [etcd : Pre-pull etcd image] *********************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:9
Wednesday 09 January 2019 15:58:42 +0100 (0:00:00.329) 0:19:16.235 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [etcd : Configure etcd profile.d aliases] ********************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/drop_etcdctl.yml:2
Wednesday 09 January 2019 15:58:42 +0100 (0:00:00.112) 0:19:16.347 *****
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
(0, '/root\n', '')
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045922.17-140978324547988 `" && echo ansible-tmp-1547045922.17-140978324547988="` echo /root/.ansible/tmp/ansible-tmp-1547045922.17-140978324547988 `" ) && sleep 0'"'"''
(0, 'ansible-tmp-1547045922.17-140978324547988=/root/.ansible/tmp/ansible-tmp-1547045922.17-140978324547988\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
"isdir": false, "ctime": 1547019658.1705887, "isblk": false, "wgrp": false, "checksum": "67725f6a8671eecd798de52ad1df45a4b61883c7", "dev": 64769, "roth": true, "isfifo": false, "mode": "0755", "xgrp": true, "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:bin_t:s0", "mode": "0755", "path": "/etc/profile.d/etcdctl.sh", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": "etcdctl.sh.j2", "path": "/etc/profile.d/etcdctl.sh", "owner": "root", "follow": false, "group": "root", "unsafe_writes": null, "serole": null, "content": null, "state": "file", "setype": null, "dest": "/etc/profile.d/etcdctl.sh", "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 493, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/profile.d/etcdctl.sh"}, "before": {"path": "/etc/profile.d/etcdctl.sh"}}, "size": 833}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547045922.17-140978324547988/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "checksum": "67725f6a8671eecd798de52ad1df45a4b61883c7", "dest": "/etc/profile.d/etcdctl.sh", "diff": { "after": { "path": "/etc/profile.d/etcdctl.sh" }, "before": { "path": "/etc/profile.d/etcdctl.sh" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": "etcdctl.sh.j2", "attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/etc/profile.d/etcdctl.sh", "directory_mode": null, "follow": false, "force": false, "group": "root", "mode": 493, "owner": "root", "path": "/etc/profile.d/etcdctl.sh", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "file", "unsafe_writes": null } }, "mode": "0755", "owner": "root", "path": "/etc/profile.d/etcdctl.sh", "secontext": "system_u:object_r:bin_t:s0", "size": 833, "state": "file", "uid": 0 } TASK [etcd : Add iptables allow rules] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/firewall.yml:4 Wednesday 09 January 2019 15:58:42 +0100 (0:00:00.500) 0:19:16.848 
TASK [etcd : Add iptables allow rules] ****************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/firewall.yml:4
Wednesday 09 January 2019 15:58:42 +0100 (0:00:00.500) 0:19:16.848 *****
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"protocol": "tcp", "name": "etcd", "chain": "OS_FIREWALL_ALLOW", "create_jump_rule": true, "action": "add", "ip_version": "ipv4", "jump_rule_chain": "INPUT", "port": "2379"}}, "output": [], "changed": false}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => (item={u'port': u'2379/tcp', u'service': u'etcd'}) => { "changed": false, "invocation": { "module_args": { "action": "add", "chain": "OS_FIREWALL_ALLOW", "create_jump_rule": true, "ip_version": "ipv4", "jump_rule_chain": "INPUT", "name": "etcd", "port": "2379", "protocol": "tcp" } }, "item": { "port": "2379/tcp", "service": "etcd" }, "output": [] }
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"protocol": "tcp", "name": "etcd peering", "chain": "OS_FIREWALL_ALLOW", "create_jump_rule": true, "action": "add", "ip_version": "ipv4", "jump_rule_chain": "INPUT", "port": "2380"}}, "output": [], "changed": false}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => (item={u'port': u'2380/tcp', u'service': u'etcd peering'}) => { "changed": false, "invocation": { "module_args": { "action": "add", "chain": "OS_FIREWALL_ALLOW", "create_jump_rule": true, "ip_version": "ipv4", "jump_rule_chain": "INPUT", "name": "etcd peering", "port": "2380", "protocol": "tcp" } }, "item": { "port": "2380/tcp", "service": "etcd peering" }, "output": [] }
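Both rules already existed, so the module reports "changed": false with empty output. A hedged sketch of the iptables state it ensures, reconstructed from the module_args (chain OS_FIREWALL_ALLOW, a jump rule in INPUT, tcp 2379/2380; the module's exact match options may differ):

    iptables -N OS_FIREWALL_ALLOW                 # create_jump_rule: chain plus a jump from INPUT
    iptables -A INPUT -j OS_FIREWALL_ALLOW
    iptables -A OS_FIREWALL_ALLOW -p tcp --dport 2379 -j ACCEPT   # etcd client traffic
    iptables -A OS_FIREWALL_ALLOW -p tcp --dport 2380 -j ACCEPT   # etcd peering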
was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item={u'port': u'2380/tcp', u'service': u'etcd peering'}) => { "changed": false, "item": { "port": "2380/tcp", "service": "etcd peering" }, "skip_reason": "Conditional result was False" } TASK [etcd : Remove firewalld allow rules] ********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/firewall.yml:33 Wednesday 09 January 2019 15:58:43 +0100 (0:00:00.156) 0:19:17.710 ***** TASK [etcd : Ensure etcd datadir exists] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:25 Wednesday 09 January 2019 15:58:43 +0100 (0:00:00.105) 0:19:17.816 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "etcd", "uid": 996, "changed": false, "owner": "etcd", "state": "directory", "gid": 993, "secontext": "system_u:object_r:container_file_t:s0", "mode": "0700", "path": "/var/lib/etcd/", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/var/lib/etcd/", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "directory", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 448, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/var/lib/etcd/"}, "before": {"path": "/var/lib/etcd/"}}, "size": 4096}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "diff": { "after": { "path": "/var/lib/etcd/" }, "before": { "path": "/var/lib/etcd/" } }, "gid": 993, "group": "etcd", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": 448, "owner": null, "path": "/var/lib/etcd/", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "directory", "unsafe_writes": null } }, "mode": "0700", "owner": "etcd", "path": "/var/lib/etcd/", "secontext": "system_u:object_r:container_file_t:s0", "size": 4096, "state": "directory", "uid": 996 } TASK [etcd : Validate permissions on the config dir] 
************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:31 Wednesday 09 January 2019 15:58:43 +0100 (0:00:00.309) 0:19:18.125 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "directory", "gid": 0, "secontext": "system_u:object_r:etc_t:s0", "mode": "0700", "path": "/etc/etcd", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/etc/etcd", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "directory", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 448, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/etcd"}, "before": {"path": "/etc/etcd"}}, "size": 4096}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "diff": { "after": { "path": "/etc/etcd" }, "before": { "path": "/etc/etcd" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": 448, "owner": null, "path": "/etc/etcd", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "directory", "unsafe_writes": null } }, "mode": "0700", "owner": "root", "path": "/etc/etcd", "secontext": "system_u:object_r:etc_t:s0", "size": 4096, "state": "directory", "uid": 0 } TASK [etcd : Validate permissions on the static pods dir] ******************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:37 Wednesday 09 January 2019 15:58:44 +0100 (0:00:00.310) 0:19:18.436 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": true, "owner": "root", "state": "directory", "gid": 0, "secontext": "unconfined_u:object_r:etc_t:s0", "mode": 
"0700", "path": "/etc/origin/node/pods/", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/etc/origin/node/pods/", "owner": "root", "follow": true, "group": "root", "unsafe_writes": null, "state": "directory", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 448, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/origin/node/pods/", "mode": "0700"}, "before": {"path": "/etc/origin/node/pods/", "mode": "0755"}}, "size": 4096}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "diff": { "after": { "mode": "0700", "path": "/etc/origin/node/pods/" }, "before": { "mode": "0755", "path": "/etc/origin/node/pods/" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": "root", "mode": 448, "owner": "root", "path": "/etc/origin/node/pods/", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "directory", "unsafe_writes": null } }, "mode": "0700", "owner": "root", "path": "/etc/origin/node/pods/", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 4096, "state": "directory", "uid": 0 } TASK [etcd : Write etcd global config file] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:45 Wednesday 09 January 2019 15:58:44 +0100 (0:00:00.297) 0:19:18.733 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045924.55-106536139562314 `" && echo ansible-tmp-1547045924.55-106536139562314="` echo /root/.ansible/tmp/ansible-tmp-1547045924.55-106536139562314 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045924.55-106536139562314=/root/.ansible/tmp/ansible-tmp-1547045924.55-106536139562314\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o 
ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/etc/etcd/etcd.conf", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": true, "device_type": 0, "mtime": 1536865752.9936197, "block_size": 4096, "inode": 1308245, "isgid": false, "size": 1408, "executable": false, "isuid": false, "readable": true, "version": "1173833597", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/plain", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/etcd/etcd.conf", "xusr": false, "atime": 1547017781.2293537, "isdir": false, "ctime": 1536865753.3266256, "isblk": false, "wgrp": false, "checksum": "929cda43f0a0a784e49d8f0d6d1ecf93ca196adf", "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "xgrp": false, "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:etc_t:s0", "mode": "0644", "path": "/etc/etcd/etcd.conf", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": "etcd.conf.j2", "path": "/etc/etcd/etcd.conf", "owner": null, "follow": false, "group": null, "unsafe_writes": null, "state": "file", "content": null, "serole": null, "setype": null, "dest": "/etc/etcd/etcd.conf", "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/etcd/etcd.conf"}, "before": {"path": "/etc/etcd/etcd.conf"}}, "size": 1408}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547045924.55-106536139562314/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "checksum": "929cda43f0a0a784e49d8f0d6d1ecf93ca196adf", "dest": "/etc/etcd/etcd.conf", "diff": { "after": { "path": "/etc/etcd/etcd.conf" }, "before": { "path": "/etc/etcd/etcd.conf" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": "etcd.conf.j2", "attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/etc/etcd/etcd.conf", "directory_mode": null, "follow": false, 
"force": false, "group": null, "mode": null, "owner": null, "path": "/etc/etcd/etcd.conf", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "file", "unsafe_writes": null } }, "mode": "0644", "owner": "root", "path": "/etc/etcd/etcd.conf", "secontext": "system_u:object_r:etc_t:s0", "size": 1408, "state": "file", "uid": 0 } TASK [etcd : Create temp directory for static pods] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:51 Wednesday 09 January 2019 15:58:45 +0100 (0:00:00.553) 0:19:19.287 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:45.228927", "stdout": "/tmp/openshift-ansible-4IKRly", "cmd": ["mktemp", "-d", "/tmp/openshift-ansible-XXXXXX"], "rc": 0, "start": "2019-01-09 15:58:45.225811", "stderr": "", "delta": "0:00:00.003116", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-ansible-XXXXXX" ], "delta": "0:00:00.003116", "end": "2019-01-09 15:58:45.228927", "invocation": { "module_args": { "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:45.225811", "stderr": "", "stderr_lines": [], "stdout": "/tmp/openshift-ansible-4IKRly", "stdout_lines": [ "/tmp/openshift-ansible-4IKRly" ] } TASK [etcd : Prepare etcd static pod] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:56 Wednesday 09 January 2019 15:58:45 +0100 (0:00:00.275) 0:19:19.562 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103 `" && echo ansible-tmp-1547045925.39-80555479723103="` echo /root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045925.39-80555479723103=/root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/openshift-ansible-4IKRly", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "binary", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": false, "device_type": 0, "mtime": 1547045925.2277071, "block_size": 4096, "inode": 660254, "isgid": false, "size": 4096, "executable": true, "isuid": false, "readable": true, "version": "1807979975", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "inode/directory", "blocks": 8, "xoth": false, "islnk": false, "nlink": 2, "issock": false, "rgrp": false, "gr_name": "root", "path": "/tmp/openshift-ansible-4IKRly", "xusr": true, "atime": 1547045925.2277071, "isdir": true, "ctime": 1547045925.2277071, "isblk": false, "wgrp": false, "xgrp": false, "dev": 64769, "roth": false, "isfifo": false, "mode": "0700", "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/etcd/files/etcd.yaml TO /root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/etcd/files/etcd.yaml 
/root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/ /root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "checksum": "2ba8054cf248bf1093f686a7b459e850e8cd838b", "md5sum": "c09816fa24eefdb7aa68934db25fd9aa", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0600", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "etcd.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/source", "checksum": "2ba8054cf248bf1093f686a7b459e850e8cd838b", "seuser": null, "delimiter": null, "mode": 384, "attributes": null, "backup": false}}, "size": 945}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=etcd.yaml) => { "changed": true, "checksum": "2ba8054cf248bf1093f686a7b459e850e8cd838b", "dest": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "etcd.yaml", "attributes": null, "backup": false, "checksum": "2ba8054cf248bf1093f686a7b459e850e8cd838b", "content": null, "delimiter": null, "dest": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": 384, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/source", "unsafe_writes": null, "validate": null } }, "item": "etcd.yaml", "md5sum": "c09816fa24eefdb7aa68934db25fd9aa", "mode": 
"0600", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 945, "src": "/root/.ansible/tmp/ansible-tmp-1547045925.39-80555479723103/source", "state": "file", "uid": 0 } TASK [etcd : Update etcd static pod] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:64 Wednesday 09 January 2019 15:58:46 +0100 (0:00:00.700) 0:19:20.262 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "", "src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "backup": false, "update": false, "value": null, "backup_ext": ".20190109T155846", "curr_value_format": "yaml", "edits": [{"value": "registry.redhat.io/rhel7/etcd:3.2.22", "key": "spec.containers[0].image"}], "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": true, "result": [{"edit": {"kind": "Pod", "spec": {"priorityClassName": "system-node-critical", "restartPolicy": "Always", "hostNetwork": true, "containers": [{"livenessProbe": {"initialDelaySeconds": 45, "exec": null}, "securityContext": {"privileged": true}, "name": "etcd", "workingDir": "/var/lib/etcd", "image": "registry.redhat.io/rhel7/etcd:3.2.22", "args": ["#!/bin/sh\\nset -o allexport\\nsource /etc/etcd/etcd.conf\\nexec etcd\\n"], "volumeMounts": [{"readOnly": true, "mountPath": "/etc/etcd/", "name": "master-config"}, {"mountPath": "/var/lib/etcd/", "name": "master-data"}], "command": ["/bin/sh", "-c"]}], "volumes": [{"hostPath": {"path": "/etc/etcd/"}, "name": "master-config"}, {"hostPath": {"path": "/var/lib/etcd"}, "name": "master-data"}]}, "apiVersion": "v1", "metadata": {"labels": {"openshift.io/control-plane": "true", "openshift.io/component": "etcd"}, "namespace": "kube-system", "name": "master-etcd", "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}, "key": "spec.containers[0].image"}]}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=etcd.yaml) => { "changed": true, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T155846", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": [ { "key": "spec.containers[0].image", "value": "registry.redhat.io/rhel7/etcd:3.2.22" } ], "index": null, "key": "", "separator": ".", "src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "state": "present", "update": false, "value": null, "value_type": "" } }, "item": "etcd.yaml", "result": [ { "edit": { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "scheduler.alpha.kubernetes.io/critical-pod": "" }, "labels": { "openshift.io/component": "etcd", "openshift.io/control-plane": "true" }, "name": 
"master-etcd", "namespace": "kube-system" }, "spec": { "containers": [ { "args": [ "#!/bin/sh\nset -o allexport\nsource /etc/etcd/etcd.conf\nexec etcd\n" ], "command": [ "/bin/sh", "-c" ], "image": "registry.redhat.io/rhel7/etcd:3.2.22", "livenessProbe": { "exec": null, "initialDelaySeconds": 45 }, "name": "etcd", "securityContext": { "privileged": true }, "volumeMounts": [ { "mountPath": "/etc/etcd/", "name": "master-config", "readOnly": true }, { "mountPath": "/var/lib/etcd/", "name": "master-data" } ], "workingDir": "/var/lib/etcd" } ], "hostNetwork": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "volumes": [ { "hostPath": { "path": "/etc/etcd/" }, "name": "master-config" }, { "hostPath": { "path": "/var/lib/etcd" }, "name": "master-data" } ] } }, "key": "spec.containers[0].image" } ], "state": "present" } TASK [etcd : Set etcd host as a probe target host] ************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:73 Wednesday 09 January 2019 15:58:46 +0100 (0:00:00.331) 0:19:20.594 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "", "src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "backup": false, "update": false, "value": null, "backup_ext": ".20190109T155846", "curr_value_format": "yaml", "edits": [{"value": ["etcdctl", "--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://172.30.80.240:2379", "cluster-health"], "key": "spec.containers[0].livenessProbe.exec.command"}], "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": true, "result": [{"edit": {"kind": "Pod", "spec": {"priorityClassName": "system-node-critical", "restartPolicy": "Always", "hostNetwork": true, "containers": [{"livenessProbe": {"initialDelaySeconds": 45, "exec": {"command": ["etcdctl", "--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://172.30.80.240:2379", "cluster-health"]}}, "securityContext": {"privileged": true}, "name": "etcd", "volumeMounts": [{"readOnly": true, "mountPath": "/etc/etcd/", "name": "master-config"}, {"mountPath": "/var/lib/etcd/", "name": "master-data"}], "image": "registry.redhat.io/rhel7/etcd:3.2.22", "args": ["#!/bin/sh\\nset -o allexport\\nsource /etc/etcd/etcd.conf\\nexec etcd\\n"], "workingDir": "/var/lib/etcd", "command": ["/bin/sh", "-c"]}], "volumes": [{"hostPath": {"path": "/etc/etcd/"}, "name": "master-config"}, {"hostPath": {"path": "/var/lib/etcd"}, "name": "master-data"}]}, "apiVersion": "v1", "metadata": {"labels": {"openshift.io/control-plane": "true", 
"openshift.io/component": "etcd"}, "namespace": "kube-system", "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}, "name": "master-etcd"}}, "key": "spec.containers[0].livenessProbe.exec.command"}]}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=etcd.yaml) => { "changed": true, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T155846", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": [ { "key": "spec.containers[0].livenessProbe.exec.command", "value": [ "etcdctl", "--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://172.30.80.240:2379", "cluster-health" ] } ], "index": null, "key": "", "separator": ".", "src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "state": "present", "update": false, "value": null, "value_type": "" } }, "item": "etcd.yaml", "result": [ { "edit": { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "scheduler.alpha.kubernetes.io/critical-pod": "" }, "labels": { "openshift.io/component": "etcd", "openshift.io/control-plane": "true" }, "name": "master-etcd", "namespace": "kube-system" }, "spec": { "containers": [ { "args": [ "#!/bin/sh\nset -o allexport\nsource /etc/etcd/etcd.conf\nexec etcd\n" ], "command": [ "/bin/sh", "-c" ], "image": "registry.redhat.io/rhel7/etcd:3.2.22", "livenessProbe": { "exec": { "command": [ "etcdctl", "--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://172.30.80.240:2379", "cluster-health" ] }, "initialDelaySeconds": 45 }, "name": "etcd", "securityContext": { "privileged": true }, "volumeMounts": [ { "mountPath": "/etc/etcd/", "name": "master-config", "readOnly": true }, { "mountPath": "/var/lib/etcd/", "name": "master-data" } ], "workingDir": "/var/lib/etcd" } ], "hostNetwork": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "volumes": [ { "hostPath": { "path": "/etc/etcd/" }, "name": "master-config" }, { "hostPath": { "path": "/var/lib/etcd" }, "name": "master-data" } ] } }, "key": "spec.containers[0].livenessProbe.exec.command" } ], "state": "present" } TASK [etcd : Deploy etcd static pod] **************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:92 Wednesday 09 January 2019 15:58:46 +0100 (0:00:00.506) 0:19:21.100 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de 
'/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547045927.08-229159002021226 `" && echo ansible-tmp-1547045927.08-229159002021226="` echo /root/.ansible/tmp/ansible-tmp-1547045927.08-229159002021226 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547045927.08-229159002021226=/root/.ansible/tmp/ansible-tmp-1547045927.08-229159002021226\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "changed": false, "group": "root", "uid": 0, "dest": "/etc/origin/node/pods/etcd.yaml", "checksum": "a270af3a0d0cb5c6c0a797fdc7c6bc6977310bc6", "md5sum": "aeace897a875352a578dbfc0ffd7605e", "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:etc_t:s0", "mode": "0600", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": true, "_original_basename": null, "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/etc/origin/node/pods/etcd.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "checksum": null, "seuser": null, "delimiter": null, "mode": 384, "attributes": null, "backup": false}}, "size": 1229}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=etcd.yaml) => { "changed": false, "checksum": "a270af3a0d0cb5c6c0a797fdc7c6bc6977310bc6", "dest": "/etc/origin/node/pods/etcd.yaml", "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": null, "attributes": null, "backup": false, "checksum": null, "content": null, "delimiter": null, "dest": "/etc/origin/node/pods/etcd.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": 384, "owner": null, "regexp": null, "remote_src": true, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "unsafe_writes": null, "validate": null } }, "item": "etcd.yaml", "md5sum": "aeace897a875352a578dbfc0ffd7605e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1229, "src": "/tmp/openshift-ansible-4IKRly/etcd.yaml", "state": "file", "uid": 0 } TASK [etcd : Remove temp directory] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml:101 Wednesday 09 January 2019 15:58:47 +0100 (0:00:00.631) 0:19:21.732 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o 
ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/tmp/openshift-ansible-4IKRly", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "absent", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "name": "/tmp/openshift-ansible-4IKRly", "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "path": "/tmp/openshift-ansible-4IKRly", "state": "absent", "changed": true, "diff": {"after": {"path": "/tmp/openshift-ansible-4IKRly", "state": "absent"}, "before": {"path": "/tmp/openshift-ansible-4IKRly", "state": "directory"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "diff": { "after": { "path": "/tmp/openshift-ansible-4IKRly", "state": "absent" }, "before": { "path": "/tmp/openshift-ansible-4IKRly", "state": "directory" } }, "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "name": "/tmp/openshift-ansible-4IKRly", "owner": null, "path": "/tmp/openshift-ansible-4IKRly", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "absent", "unsafe_writes": null } }, "path": "/tmp/openshift-ansible-4IKRly", "state": "absent" } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/upgrade_static.yml:49 Wednesday 09 January 2019 15:58:47 +0100 (0:00:00.287) 0:19:22.019 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "r_etcd_common_etcd_runtime": "static_pod" }, "changed": false } TASK [etcd : Verify cluster is healthy] ************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/verify_cluster_health.yml:2 Wednesday 09 January 2019 15:58:47 +0100 (0:00:00.144) 0:19:22.163 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:48.247907", "stdout": "member 2cffc34cde3715e2 is healthy: got healthy result from https://172.30.80.240:2379\\ncluster is healthy", "cmd": ["/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", 
"--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://sp-os-master01.os.ad.scanplus.de:2379", "cluster-health"], "rc": 0, "start": "2019-01-09 15:58:48.109387", "stderr": "", "delta": "0:00:00.138520", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt --endpoints https://sp-os-master01.os.ad.scanplus.de:2379 cluster-health", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", "--cert-file", "/etc/etcd/peer.crt", "--key-file", "/etc/etcd/peer.key", "--ca-file", "/etc/etcd/ca.crt", "--endpoints", "https://sp-os-master01.os.ad.scanplus.de:2379", "cluster-health" ], "delta": "0:00:00.138520", "end": "2019-01-09 15:58:48.247907", "invocation": { "module_args": { "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt --endpoints https://sp-os-master01.os.ad.scanplus.de:2379 cluster-health", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:48.109387", "stderr": "", "stderr_lines": [], "stdout": "member 2cffc34cde3715e2 is healthy: got healthy result from https://172.30.80.240:2379\ncluster is healthy", "stdout_lines": [ "member 2cffc34cde3715e2 is healthy: got healthy result from https://172.30.80.240:2379", "cluster is healthy" ] } META: ran handlers META: ran handlers PLAY [Backup etcd] ********************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup.yml:2 Wednesday 09 January 2019 15:58:48 +0100 (0:00:00.450) 0:19:22.614 ***** included: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml for sp-os-master01.os.ad.scanplus.de TASK [etcd : include_tasks] ************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:2 Wednesday 09 January 2019 15:58:48 +0100 (0:00:00.196) 0:19:22.810 ***** included: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml for sp-os-master01.os.ad.scanplus.de TASK [etcd : set_fact] 
****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:5 Wednesday 09 January 2019 15:58:48 +0100 (0:00:00.205) 0:19:23.015 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_backup_dir_name": "openshift-backup-post-3.0-20190109155848" }, "changed": false } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:8 Wednesday 09 January 2019 15:58:48 +0100 (0:00:00.151) 0:19:23.167 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_etcd_incontainer_data_dir": "/var/lib/etcd/" }, "changed": false } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:11 Wednesday 09 January 2019 15:58:49 +0100 (0:00:00.163) 0:19:23.331 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_etcd_incontainer_backup_dir": "/var/lib/etcd//openshift-backup-post-3.0-20190109155848" }, "changed": false } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/vars.yml:14 Wednesday 09 January 2019 15:58:49 +0100 (0:00:00.173) 0:19:23.505 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "l_etcd_backup_dir": "/var/lib/etcd//openshift-backup-post-3.0-20190109155848" }, "changed": false } TASK [etcd : Check available disk space for etcd backup] ******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:5 Wednesday 09 January 2019 15:58:49 +0100 (0:00:00.159) 0:19:23.664 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:49.613624", 
"stdout": "31041436", "cmd": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "rc": 0, "start": "2019-01-09 15:58:49.606675", "stderr": "", "delta": "0:00:00.006949", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "delta": "0:00:00.006949", "end": "2019-01-09 15:58:49.613624", "invocation": { "module_args": { "_raw_params": "df --output=avail -k /var/lib/etcd/ | tail -n 1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:49.606675", "stderr": "", "stderr_lines": [], "stdout": "31041436", "stdout_lines": [ "31041436" ] } TASK [etcd : Check current etcd disk usage] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:13 Wednesday 09 January 2019 15:58:49 +0100 (0:00:00.290) 0:19:23.954 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:49.911803", "stdout": "644048", "cmd": "du --exclude=\'*openshift-backup*\' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "rc": 0, "start": "2019-01-09 15:58:49.904956", "stderr": "", "delta": "0:00:00.006847", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": "du --exclude=\'*openshift-backup*\' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": "du --exclude='*openshift-backup*' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "delta": "0:00:00.006847", "end": "2019-01-09 15:58:49.911803", "invocation": { "module_args": { "_raw_params": "du --exclude='*openshift-backup*' -k /var/lib/etcd/ | tail -n 1 | cut -f1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:49.904956", "stderr": "", "stderr_lines": [], "stdout": "644048", "stdout_lines": [ "644048" ] } TASK [etcd : Abort if insufficient disk space for etcd backup] ************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:20 Wednesday 09 January 2019 15:58:50 +0100 (0:00:00.292) 0:19:24.247 
***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [etcd : Check selinux label of '/var/lib/etcd/'] *********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:32 Wednesday 09 January 2019 15:58:50 +0100 (0:00:00.114) 0:19:24.361 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:50.317289", "stdout": "system_u:object_r:container_file_t:s0", "cmd": ["stat", "-c", "%C", "/var/lib/etcd/"], "rc": 0, "start": "2019-01-09 15:58:50.314310", "stderr": "", "delta": "0:00:00.002979", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "stat -c \'%C\' /var/lib/etcd/", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "stat", "-c", "%C", "/var/lib/etcd/" ], "delta": "0:00:00.002979", "end": "2019-01-09 15:58:50.317289", "invocation": { "module_args": { "_raw_params": "stat -c '%C' /var/lib/etcd/", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:50.314310", "stderr": "", "stderr_lines": [], "stdout": "system_u:object_r:container_file_t:s0", "stdout_lines": [ "system_u:object_r:container_file_t:s0" ] } TASK [etcd : debug] ********************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:37 Wednesday 09 January 2019 15:58:50 +0100 (0:00:00.290) 0:19:24.652 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": { "changed": true, "cmd": [ "stat", "-c", "%C", "/var/lib/etcd/" ], "delta": "0:00:00.002979", "end": "2019-01-09 15:58:50.317289", "failed": false, "rc": 0, "start": "2019-01-09 15:58:50.314310", "stderr": "", "stderr_lines": [], "stdout": "system_u:object_r:container_file_t:s0", "stdout_lines": [ "system_u:object_r:container_file_t:s0" ] } } TASK [etcd : Make sure the '/var/lib/etcd/' has the proper label] *********************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:40 Wednesday 09 January 2019 15:58:50 +0100 (0:00:00.135) 0:19:24.787 ***** Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:50.740327", "stdout": "", "cmd": ["chcon", "-t", "svirt_sandbox_file_t", "/var/lib/etcd/"], "rc": 0, "start": "2019-01-09 15:58:50.737249", "stderr": "", "delta": "0:00:00.003078", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "chcon -t svirt_sandbox_file_t \\"/var/lib/etcd/\\"", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "chcon", "-t", "svirt_sandbox_file_t", "/var/lib/etcd/" ], "delta": "0:00:00.003078", "end": "2019-01-09 15:58:50.740327", "invocation": { "module_args": { "_raw_params": "chcon -t svirt_sandbox_file_t \"/var/lib/etcd/\"", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:50.737249", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [etcd : Generate etcd backup] ****************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:47 Wednesday 09 January 2019 15:58:50 +0100 (0:00:00.279) 0:19:25.067 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:53.266828", "stdout": "", "cmd": ["/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", "backup", "--data-dir=/var/lib/etcd/", "--backup-dir=/var/lib/etcd//openshift-backup-post-3.0-20190109155848"], "rc": 0, "start": "2019-01-09 15:58:50.998984", "stderr": "2019-01-09 14:58:53.217375 I | wal: segmented wal file /var/lib/etcd/openshift-backup-post-3.0-20190109155848/member/wal/0000000000000001-0000000006a38344.wal is created", "delta": "0:00:02.267844", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl backup --data-dir=/var/lib/etcd/ --backup-dir=/var/lib/etcd//openshift-backup-post-3.0-20190109155848", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "/usr/local/bin/master-exec", "etcd", "etcd", "etcdctl", "backup", "--data-dir=/var/lib/etcd/", 
"--backup-dir=/var/lib/etcd//openshift-backup-post-3.0-20190109155848" ], "delta": "0:00:02.267844", "end": "2019-01-09 15:58:53.266828", "invocation": { "module_args": { "_raw_params": "/usr/local/bin/master-exec etcd etcd etcdctl backup --data-dir=/var/lib/etcd/ --backup-dir=/var/lib/etcd//openshift-backup-post-3.0-20190109155848", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:50.998984", "stderr": "2019-01-09 14:58:53.217375 I | wal: segmented wal file /var/lib/etcd/openshift-backup-post-3.0-20190109155848/member/wal/0000000000000001-0000000006a38344.wal is created", "stderr_lines": [ "2019-01-09 14:58:53.217375 I | wal: segmented wal file /var/lib/etcd/openshift-backup-post-3.0-20190109155848/member/wal/0000000000000001-0000000006a38344.wal is created" ], "stdout": "", "stdout_lines": [] } TASK [etcd : Check for v3 data store] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:54 Wednesday 09 January 2019 15:58:53 +0100 (0:00:02.544) 0:19:27.611 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/var/lib/etcd//member/snap/db", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 996, "exists": true, "woth": false, "device_type": 0, "mtime": 1547045932.609849, "block_size": 4096, "inode": 519580, "isgid": false, "size": 213491712, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "etcd", "gid": 993, "ischr": false, "wusr": true, "writeable": true, "blocks": 412824, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": false, "gr_name": "etcd", "path": "/var/lib/etcd//member/snap/db", "xusr": false, "atime": 1547045914.8845086, "isdir": false, "ctime": 1547045932.609849, "isblk": false, "xgrp": false, "dev": 64771, "roth": false, "isfifo": false, "mode": "0600", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/var/lib/etcd//member/snap/db" } }, "stat": { "atime": 1547045914.8845086, "block_size": 4096, "blocks": 412824, "ctime": 1547045932.609849, "dev": 64771, "device_type": 0, "executable": false, "exists": true, "gid": 993, "gr_name": "etcd", "inode": 519580, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0600", "mtime": 1547045932.609849, "nlink": 1, "path": 
"/var/lib/etcd//member/snap/db", "pw_name": "etcd", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 213491712, "uid": 996, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [etcd : Copy etcd v3 data store] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:62 Wednesday 09 January 2019 15:58:53 +0100 (0:00:00.428) 0:19:28.040 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:54.151116", "stdout": "", "cmd": ["cp", "-a", "/var/lib/etcd//member/snap/db", "/var/lib/etcd//openshift-backup-post-3.0-20190109155848/member/snap/"], "rc": 0, "start": "2019-01-09 15:58:53.989143", "stderr": "", "delta": "0:00:00.161973", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "cp -a /var/lib/etcd//member/snap/db /var/lib/etcd//openshift-backup-post-3.0-20190109155848/member/snap/", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "cp", "-a", "/var/lib/etcd//member/snap/db", "/var/lib/etcd//openshift-backup-post-3.0-20190109155848/member/snap/" ], "delta": "0:00:00.161973", "end": "2019-01-09 15:58:54.151116", "invocation": { "module_args": { "_raw_params": "cp -a /var/lib/etcd//member/snap/db /var/lib/etcd//openshift-backup-post-3.0-20190109155848/member/snap/", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:58:53.989143", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [etcd : set_fact] ****************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:68 Wednesday 09 January 2019 15:58:54 +0100 (0:00:00.443) 0:19:28.483 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "r_etcd_common_backup_complete": true }, "changed": false } TASK [etcd : Display location of etcd backup] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/etcd/tasks/backup/backup.yml:71 Wednesday 09 January 2019 15:58:54 +0100 
(0:00:00.130) 0:19:28.613 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "Etcd backup created in /var/lib/etcd//openshift-backup-post-3.0-20190109155848" } META: ran handlers META: ran handlers PLAY [Gate on etcd backup] ************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_backup.yml:17 Wednesday 09 January 2019 15:58:54 +0100 (0:00:00.131) 0:19:28.745 ***** ok: [localhost] => { "ansible_facts": { "etcd_backup_completed": [ "sp-os-master01.os.ad.scanplus.de" ] }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_backup.yml:21 Wednesday 09 January 2019 15:58:54 +0100 (0:00:00.214) 0:19:28.960 ***** ok: [localhost] => { "ansible_facts": { "etcd_backup_failed": [] }, "changed": false } TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/private/upgrade_backup.yml:23 Wednesday 09 January 2019 15:58:54 +0100 (0:00:00.116) 0:19:29.076 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Determine if service signer cert must be created] ********************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [Determine if service signer certificate must be created] ************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:11 Wednesday 09 January 2019 15:58:54 +0100 (0:00:00.116) 0:19:29.192 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root 
-o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/etc/origin/master/service-signer.crt", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1517401935.3653686, "block_size": 4096, "inode": 397670, "isgid": false, "size": 1115, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/origin/master/service-signer.crt", "xusr": false, "atime": 1547018001.0845912, "isdir": false, "ctime": 1517401935.3653686, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/origin/master/service-signer.crt" } }, "stat": { "atime": 1547018001.0845912, "block_size": 4096, "blocks": 8, "ctime": 1517401935.3653686, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 397670, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1517401935.3653686, "nlink": 1, "path": "/etc/origin/master/service-signer.crt", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1115, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [openshift_control_plane : verify API server] ************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/verify_api_server.yml:3 Wednesday 09 January 2019 15:58:55 +0100 (0:00:00.293) 0:19:29.486 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:55.537090", "stdout": "ok", "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready"], "rc": 0, "start": "2019-01-09 15:58:55.415309", "stderr": "", "delta": "0:00:00.121781", "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert 
/etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "cmd": [ "curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready" ], "delta": "0:00:00.121781", "end": "2019-01-09 15:58:55.537090", "invocation": { "module_args": { "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "rc": 0, "start": "2019-01-09 15:58:55.415309", "stderr": "", "stderr_lines": [], "stdout": "ok", "stdout_lines": [ "ok" ] } META: ran handlers META: ran handlers PLAY [Create local temp directory for syncing certs] ************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [Create local temp directory for syncing certs] ************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:7 Wednesday 09 January 2019 15:58:55 +0100 (0:00:00.460) 0:19:29.947 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Chmod local temp directory] ******************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:13 Wednesday 09 January 2019 15:58:55 +0100 (0:00:00.253) 0:19:30.200 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Create service signer certificate] ************************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [Create remote temp directory for creating certs] ********************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:23 Wednesday 09 January 2019 15:58:56 +0100 (0:00:00.200) 0:19:30.400 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Create service signer 
certificate] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:29 Wednesday 09 January 2019 15:58:56 +0100 (0:00:00.204) 0:19:30.604 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Retrieve service signer certificate] ********************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:40 Wednesday 09 January 2019 15:58:56 +0100 (0:00:00.197) 0:19:30.802 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => (item=service-signer.crt) => { "changed": false, "item": "service-signer.crt", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=service-signer.key) => { "changed": false, "item": "service-signer.key", "skip_reason": "Conditional result was False" } TASK [Delete remote temp directory] ***************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:52 Wednesday 09 January 2019 15:58:56 +0100 (0:00:00.286) 0:19:31.088 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Deploy service signer certificate] ************************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [Deploy service signer certificate] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:62 Wednesday 09 January 2019 15:58:57 +0100 (0:00:00.205) 0:19:31.294 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => (item=service-signer.crt) => { "changed": false, "item": "service-signer.crt", "skip_reason": "Conditional result was False" } skipping: [sp-os-master01.os.ad.scanplus.de] => (item=service-signer.key) => { "changed": false, "item": "service-signer.key", "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Delete local temp directory] 
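The checks recorded around this point in the log reduce to two reusable patterns: the master health probe polls /healthz/ready with curl against the cluster CA bundle, and the "Wait for APIs to become available" task that follows probes each aggregated OpenShift API group with "oc get --raw /apis/<group>/v1" until it answers. Below is a minimal standalone sketch of both, reconstructed only from the command lines and loop items visible in this log; the host group, retry counts, delay values, and the l_api_groups variable name are assumptions for illustration, not the actual openshift-ansible role code.

    ---
    - hosts: masters            # assumed inventory group of master FQDNs
      gather_facts: false
      vars:
        # API groups polled by the wait task, as seen in the loop items below
        l_api_groups:
          - apps.openshift.io
          - authorization.openshift.io
          - build.openshift.io
          - image.openshift.io
          - network.openshift.io
          - oauth.openshift.io
          - project.openshift.io
          - quota.openshift.io
          - route.openshift.io
          - security.openshift.io
          - template.openshift.io
          - user.openshift.io
      tasks:
        - name: Verify master readiness endpoint
          # Same curl invocation as the health check logged above
          command: >
            curl --silent --tlsv1.2 --max-time 2
            --cacert /etc/origin/master/ca-bundle.crt
            https://{{ inventory_hostname }}:8443/healthz/ready
          register: api_health
          until: api_health.stdout == 'ok'
          retries: 30           # assumed; the playbook's real values are not shown in this excerpt
          delay: 2              # assumed
          changed_when: false

        - name: Wait for each OpenShift API group to be served
          # Mirrors the "oc get --raw /apis/<group>/v1" probes in the task below
          command: >
            oc --config=/etc/origin/master/admin.kubeconfig
            get --raw /apis/{{ item }}/v1
          register: api_group
          until: api_group.rc == 0
          retries: 30           # assumed
          delay: 2              # assumed
          changed_when: false
          with_items: "{{ l_api_groups }}"

Both probes are read-only, so marking them changed_when: false keeps repeated runs out of the "changed" column of the play recap; note that in the actual upgrade run below the command module reports them as changed because no such override is applied.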
****************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [Delete local temp directory] ****************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/create_service_signer_cert.yml:76 Wednesday 09 January 2019 15:58:57 +0100 (0:00:00.277) 0:19:31.571 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Pre master upgrade - Upgrade all storage] ***************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [openshift_control_plane : Wait for APIs to become available] ********************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:2 Wednesday 09 January 2019 15:58:57 +0100 (0:00:00.204) 0:19:31.776 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:58.061574", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"apps.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"deploymentconfigs\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentConfig\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"dc\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"deploymentconfigs/instantiate\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentRequest\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"deploymentconfigs/log\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentLog\\",\\"verbs\\":[\\"get\\"]},{\\"name\\":\\"deploymentconfigs/rollback\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentConfigRollback\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"deploymentconfigs/scale\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"group\\":\\"extensions\\",\\"version\\":\\"v1beta1\\",\\"kind\\":\\"Scale\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"deploymentconfigs/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentConfig\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/apps.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:58:57.873261", "stderr": "", "delta": "0:00:00.188313", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/apps.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=apps.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/apps.openshift.io/v1" ], "delta": "0:00:00.188313", "end": "2019-01-09 15:58:58.061574", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/apps.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "apps.openshift.io", "rc": 0, "start": "2019-01-09 15:58:57.873261", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"apps.openshift.io/v1\",\"resources\":[{\"name\":\"deploymentconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"dc\"],\"categories\":[\"all\"]},{\"name\":\"deploymentconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentRequest\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentLog\",\"verbs\":[\"get\"]},{\"name\":\"deploymentconfigs/rollback\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfigRollback\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/scale\",\"singularName\":\"\",\"namespaced\":true,\"group\":\"extensions\",\"version\":\"v1beta1\",\"kind\":\"Scale\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"deploymentconfigs/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"apps.openshift.io/v1\",\"resources\":[{\"name\":\"deploymentconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"dc\"],\"categories\":[\"all\"]},{\"name\":\"deploymentconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentRequest\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentLog\",\"verbs\":[\"get\"]},{\"name\":\"deploymentconfigs/rollback\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfigRollback\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/scale\",\"singularName\":\"\",\"namespaced\":true,\"group\":\"extensions\",\"version\":\"v1beta1\",\"kind\":\"Scale\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"deploymentconfigs/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:58.424163", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"authorization.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"clusterrolebindings\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterRoleBinding\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"clusterroles\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterRole\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"localresourceaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"LocalResourceAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"localsubjectaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"LocalSubjectAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"resourceaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ResourceAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"rolebindingrestrictions\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"RoleBindingRestriction\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"rolebindings\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"RoleBinding\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"roles\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Role\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"selfsubjectrulesreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"SelfSubjectRulesReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"subjectaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"SubjectAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"subjectrulesreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"SubjectRulesReview\\",\\"verbs\\":[\\"create\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/authorization.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:58:58.226799", "stderr": "", "delta": "0:00:00.197364", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/authorization.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=authorization.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/authorization.openshift.io/v1" ], "delta": "0:00:00.197364", "end": "2019-01-09 15:58:58.424163", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/authorization.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "authorization.openshift.io", "rc": 0, "start": "2019-01-09 15:58:58.226799", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"authorization.openshift.io/v1\",\"resources\":[{\"name\":\"clusterrolebindings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"clusterroles\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRole\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"localresourceaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"localsubjectaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalSubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"resourceaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"rolebindingrestrictions\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBindingRestriction\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"rolebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"roles\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Role\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"selfsubjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SelfSubjectRulesReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SubjectRulesReview\",\"verbs\":[\"create\"]}]}", "stdout_lines": [ 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"authorization.openshift.io/v1\",\"resources\":[{\"name\":\"clusterrolebindings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"clusterroles\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRole\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"localresourceaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"localsubjectaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalSubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"resourceaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"rolebindingrestrictions\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBindingRestriction\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"rolebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"roles\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Role\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"selfsubjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SelfSubjectRulesReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SubjectRulesReview\",\"verbs\":[\"create\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:58.733477", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"build.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"buildconfigs\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildConfig\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"bc\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"buildconfigs/instantiate\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildRequest\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"buildconfigs/instantiatebinary\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BinaryBuildRequestOptions\\",\\"verbs\\":[]},{\\"name\\":\\"buildconfigs/webhooks\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Build\\",\\"verbs\\":[]},{\\"name\\":\\"builds\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Build\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"builds/clone\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildRequest\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"builds/details\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Build\\",\\"verbs\\":[\\"update\\"]},{\\"name\\":\\"builds/log\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildLog\\",\\"verbs\\":[\\"get\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/build.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:58:58.564615", "stderr": "", "delta": "0:00:00.168862", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/build.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=build.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/build.openshift.io/v1" ], "delta": "0:00:00.168862", "end": "2019-01-09 15:58:58.733477", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/build.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "build.openshift.io", "rc": 0, "start": "2019-01-09 15:58:58.564615", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"build.openshift.io/v1\",\"resources\":[{\"name\":\"buildconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"bc\"],\"categories\":[\"all\"]},{\"name\":\"buildconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"buildconfigs/instantiatebinary\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BinaryBuildRequestOptions\",\"verbs\":[]},{\"name\":\"buildconfigs/webhooks\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[]},{\"name\":\"builds\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"builds/clone\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"builds/details\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"update\"]},{\"name\":\"builds/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildLog\",\"verbs\":[\"get\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"build.openshift.io/v1\",\"resources\":[{\"name\":\"buildconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"bc\"],\"categories\":[\"all\"]},{\"name\":\"buildconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"buildconfigs/instantiatebinary\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BinaryBuildRequestOptions\",\"verbs\":[]},{\"name\":\"buildconfigs/webhooks\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[]},{\"name\":\"builds\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"builds/clone\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"builds/details\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"update\"]},{\"name\":\"builds/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildLog\",\"verbs\":[\"get\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:59.054139", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"image.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"images\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Image\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"imagesignatures\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ImageSignature\\",\\"verbs\\":[\\"create\\",\\"delete\\"]},{\\"name\\":\\"imagestreamimages\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamImage\\",\\"verbs\\":[\\"get\\"],\\"shortNames\\":[\\"isimage\\"]},{\\"name\\":\\"imagestreamimports\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamImport\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"imagestreammappings\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamMapping\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"imagestreams\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStream\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"is\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"imagestreams/layers\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamLayers\\",\\"verbs\\":[\\"get\\"]},{\\"name\\":\\"imagestreams/secrets\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"SecretList\\",\\"verbs\\":[\\"get\\"]},{\\"name\\":\\"imagestreams/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStream\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"imagestreamtags\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamTag\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"],\\"shortNames\\":[\\"istag\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/image.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:58:58.883099", "stderr": "", "delta": "0:00:00.171040", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/image.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=image.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/image.openshift.io/v1" ], "delta": "0:00:00.171040", "end": "2019-01-09 15:58:59.054139", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/image.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "image.openshift.io", "rc": 0, "start": "2019-01-09 15:58:58.883099", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"image.openshift.io/v1\",\"resources\":[{\"name\":\"images\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Image\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"imagesignatures\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ImageSignature\",\"verbs\":[\"create\",\"delete\"]},{\"name\":\"imagestreamimages\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImage\",\"verbs\":[\"get\"],\"shortNames\":[\"isimage\"]},{\"name\":\"imagestreamimports\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImport\",\"verbs\":[\"create\"]},{\"name\":\"imagestreammappings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamMapping\",\"verbs\":[\"create\"]},{\"name\":\"imagestreams\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"is\"],\"categories\":[\"all\"]},{\"name\":\"imagestreams/layers\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamLayers\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/secrets\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SecretList\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"imagestreamtags\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamTag\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"],\"shortNames\":[\"istag\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"image.openshift.io/v1\",\"resources\":[{\"name\":\"images\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Image\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"imagesignatures\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ImageSignature\",\"verbs\":[\"create\",\"delete\"]},{\"name\":\"imagestreamimages\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImage\",\"verbs\":[\"get\"],\"shortNames\":[\"isimage\"]},{\"name\":\"imagestreamimports\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImport\",\"verbs\":[\"create\"]},{\"name\":\"imagestreammappings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamMapping\",\"verbs\":[\"create\"]},{\"name\":\"imagestreams\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"is\"],\"categories\":[\"all\"]},{\"name\":\"imagestreams/layers\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamLayers\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/secrets\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SecretList\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"imagestreamtags\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamTag\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"],\"shortNames\":[\"istag\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o 
ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:59.379218", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"network.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"clusternetworks\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterNetwork\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"egressnetworkpolicies\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"EgressNetworkPolicy\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"hostsubnets\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"HostSubnet\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"netnamespaces\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"NetNamespace\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/network.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:58:59.200337", "stderr": "", "delta": "0:00:00.178881", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/network.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=network.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/network.openshift.io/v1" ], "delta": "0:00:00.178881", "end": "2019-01-09 15:58:59.379218", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/network.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "network.openshift.io", "rc": 0, "start": "2019-01-09 15:58:59.200337", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"network.openshift.io/v1\",\"resources\":[{\"name\":\"clusternetworks\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterNetwork\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"egressnetworkpolicies\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"EgressNetworkPolicy\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"hostsubnets\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"HostSubnet\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"netnamespaces\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"NetNamespace\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"network.openshift.io/v1\",\"resources\":[{\"name\":\"clusternetworks\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterNetwork\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"egressnetworkpolicies\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"EgressNetworkPolicy\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"hostsubnets\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"HostSubnet\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"netnamespaces\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"NetNamespace\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:58:59.694549", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"oauth.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"oauthaccesstokens\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthAccessToken\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"oauthauthorizetokens\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthAuthorizeToken\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"oauthclientauthorizations\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthClientAuthorization\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"oauthclients\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthClient\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/oauth.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:58:59.520785", "stderr": "", "delta": "0:00:00.173764", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/oauth.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' changed: [sp-os-master01.os.ad.scanplus.de] => (item=oauth.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/oauth.openshift.io/v1" ], "delta": "0:00:00.173764", "end": "2019-01-09 15:58:59.694549", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/oauth.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "oauth.openshift.io", "rc": 0, "start": "2019-01-09 15:58:59.520785", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"oauth.openshift.io/v1\",\"resources\":[{\"name\":\"oauthaccesstokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAccessToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthauthorizetokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAuthorizeToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclientauthorizations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClientAuthorization\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclients\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClient\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"oauth.openshift.io/v1\",\"resources\":[{\"name\":\"oauthaccesstokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAccessToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthauthorizetokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAuthorizeToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclientauthorizations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClientAuthorization\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclients\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClient\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } (0, '\n{"changed": true, "end": "2019-01-09 15:59:00.041228", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"project.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"projectrequests\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ProjectRequest\\",\\"verbs\\":[\\"create\\",\\"list\\"]},{\\"name\\":\\"projects\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Project\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/project.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:58:59.860785", "stderr": "", "delta": "0:00:00.180443", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/project.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=project.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/project.openshift.io/v1" ], "delta": "0:00:00.180443", "end": "2019-01-09 15:59:00.041228", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/project.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, 
"item": "project.openshift.io", "rc": 0, "start": "2019-01-09 15:58:59.860785", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"project.openshift.io/v1\",\"resources\":[{\"name\":\"projectrequests\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ProjectRequest\",\"verbs\":[\"create\",\"list\"]},{\"name\":\"projects\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Project\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"project.openshift.io/v1\",\"resources\":[{\"name\":\"projectrequests\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ProjectRequest\",\"verbs\":[\"create\",\"list\"]},{\"name\":\"projects\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Project\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:59:00.371184", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"quota.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"appliedclusterresourcequotas\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"AppliedClusterResourceQuota\\",\\"verbs\\":[\\"get\\",\\"list\\"]},{\\"name\\":\\"clusterresourcequotas\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterResourceQuota\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"clusterquota\\"]},{\\"name\\":\\"clusterresourcequotas/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterResourceQuota\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/quota.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:59:00.188058", "stderr": "", "delta": "0:00:00.183126", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/quota.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=quota.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/quota.openshift.io/v1" ], "delta": "0:00:00.183126", "end": "2019-01-09 15:59:00.371184", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/quota.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "quota.openshift.io", "rc": 0, "start": "2019-01-09 15:59:00.188058", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"quota.openshift.io/v1\",\"resources\":[{\"name\":\"appliedclusterresourcequotas\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"AppliedClusterResourceQuota\",\"verbs\":[\"get\",\"list\"]},{\"name\":\"clusterresourcequotas\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"clusterquota\"]},{\"name\":\"clusterresourcequotas/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"quota.openshift.io/v1\",\"resources\":[{\"name\":\"appliedclusterresourcequotas\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"AppliedClusterResourceQuota\",\"verbs\":[\"get\",\"list\"]},{\"name\":\"clusterresourcequotas\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"clusterquota\"]},{\"name\":\"clusterresourcequotas/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:59:00.729588", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"route.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"routes\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Route\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"routes/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Route\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/route.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:59:00.539236", "stderr": "", "delta": "0:00:00.190352", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/route.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=route.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/route.openshift.io/v1" ], "delta": "0:00:00.190352", "end": "2019-01-09 15:59:00.729588", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/route.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": 
"route.openshift.io", "rc": 0, "start": "2019-01-09 15:59:00.539236", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"route.openshift.io/v1\",\"resources\":[{\"name\":\"routes\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"routes/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"route.openshift.io/v1\",\"resources\":[{\"name\":\"routes\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"routes/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:59:01.051478", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"security.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"podsecuritypolicyreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"PodSecurityPolicyReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"podsecuritypolicyselfsubjectreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"PodSecurityPolicySelfSubjectReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"podsecuritypolicysubjectreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"PodSecurityPolicySubjectReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"rangeallocations\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"RangeAllocation\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"securitycontextconstraints\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"SecurityContextConstraints\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"scc\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/security.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:59:00.876183", "stderr": "", "delta": "0:00:00.175295", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/security.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=security.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/security.openshift.io/v1" ], "delta": "0:00:00.175295", "end": 
"2019-01-09 15:59:01.051478", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/security.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "security.openshift.io", "rc": 0, "start": "2019-01-09 15:59:00.876183", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"security.openshift.io/v1\",\"resources\":[{\"name\":\"podsecuritypolicyreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicyReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicyselfsubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySelfSubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicysubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"rangeallocations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"RangeAllocation\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"securitycontextconstraints\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SecurityContextConstraints\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"scc\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"security.openshift.io/v1\",\"resources\":[{\"name\":\"podsecuritypolicyreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicyReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicyselfsubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySelfSubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicysubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"rangeallocations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"RangeAllocation\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"securitycontextconstraints\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SecurityContextConstraints\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"scc\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:59:01.373901", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"template.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"brokertemplateinstances\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"BrokerTemplateInstance\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"processedtemplates\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Template\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"templateinstances\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"TemplateInstance\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"templateinstances/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"TemplateInstance\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"templates\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Template\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/template.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:59:01.194736", "stderr": "", "delta": "0:00:00.179165", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/template.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=template.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/template.openshift.io/v1" ], "delta": "0:00:00.179165", "end": "2019-01-09 15:59:01.373901", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/template.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "template.openshift.io", "rc": 0, "start": "2019-01-09 15:59:01.194736", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"template.openshift.io/v1\",\"resources\":[{\"name\":\"brokertemplateinstances\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"BrokerTemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"processedtemplates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\"]},{\"name\":\"templateinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"templateinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"templates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"template.openshift.io/v1\",\"resources\":[{\"name\":\"brokertemplateinstances\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"BrokerTemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"processedtemplates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\"]},{\"name\":\"templateinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"templateinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"templates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:59:01.689506", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"user.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"groups\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Group\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"identities\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Identity\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"useridentitymappings\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"UserIdentityMapping\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"users\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"User\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/user.openshift.io/v1"], "rc": 0, "start": "2019-01-09 15:59:01.525580", "stderr": "", "delta": "0:00:00.163926", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/user.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=user.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/user.openshift.io/v1" ], "delta": "0:00:00.163926", "end": "2019-01-09 15:59:01.689506", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/user.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, 
"executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "user.openshift.io", "rc": 0, "start": "2019-01-09 15:59:01.525580", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"user.openshift.io/v1\",\"resources\":[{\"name\":\"groups\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Group\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"identities\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Identity\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"useridentitymappings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"UserIdentityMapping\",\"verbs\":[\"create\",\"delete\",\"get\",\"patch\",\"update\"]},{\"name\":\"users\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"User\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"user.openshift.io/v1\",\"resources\":[{\"name\":\"groups\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Group\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"identities\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Identity\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"useridentitymappings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"UserIdentityMapping\",\"verbs\":[\"create\",\"delete\",\"get\",\"patch\",\"update\"]},{\"name\":\"users\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"User\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } TASK [openshift_control_plane : Get API logs] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:11 Wednesday 09 January 2019 15:59:01 +0100 (0:00:04.263) 0:19:36.039 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : debug] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:18 Wednesday 09 January 2019 15:59:01 +0100 (0:00:00.108) 0:19:36.147 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : fail] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:22 Wednesday 09 
TASK [openshift_control_plane : Get API logs] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:11 Wednesday 09 January 2019 15:59:01 +0100 (0:00:04.263) 0:19:36.039 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : debug] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:18 Wednesday 09 January 2019 15:59:01 +0100 (0:00:00.108) 0:19:36.147 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : fail] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:22 Wednesday 09 January 2019 15:59:02 +0100 (0:00:00.116) 0:19:36.264 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Check for apiservices/v1beta1.metrics.k8s.io registration] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:28 Wednesday 09 January 2019 15:59:02 +0100 (0:00:00.116) 0:19:36.380 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"changed": true, "end": "2019-01-09 15:59:02.588177", "stdout": "", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.metrics.k8s.io"], "failed": true, "delta": "0:00:00.261630", "stderr": "No resources found.\\nError from server (NotFound): apiservices.apiregistration.k8s.io \\"v1beta1.metrics.k8s.io\\" not found", "rc": 1, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.metrics.k8s.io", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2019-01-09 15:59:02.326547", "msg": "non-zero return code"}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.metrics.k8s.io" ], "delta": "0:00:00.261630", "end": "2019-01-09 15:59:02.588177", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.metrics.k8s.io", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "msg": "non-zero return code", "rc": 1, "start": "2019-01-09 15:59:02.326547", "stderr": "No resources found.\nError from server (NotFound): apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\" not found", "stderr_lines": [ "No resources found.", "Error from server (NotFound): apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\" not found" ], "stdout": "", "stdout_lines": [] } TASK [openshift_control_plane : Wait for /apis/metrics.k8s.io/v1beta1 when registered] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:37 Wednesday 09 January 2019 15:59:02 +0100 (0:00:00.555) 0:19:36.936 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
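The check above probes whether a metrics API server is registered; the NotFound error is tolerated ("failed_when_result": false), and since nothing is registered the follow-up wait on /apis/metrics.k8s.io/v1beta1 is skipped. The manual equivalent of the probe, exactly as the task runs it:

    # NotFound here simply means no v1beta1.metrics.k8s.io apiservice is registered on this cluster
    oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.metrics.k8s.io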
TASK [openshift_control_plane : Check for apiservices/v1beta1.servicecatalog.k8s.io registration] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:46 Wednesday 09 January 2019 15:59:02 +0100 (0:00:00.110) 0:19:37.047 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:59:03.231416", "stdout": "NAME CREATED AT\\nv1beta1.servicecatalog.k8s.io 2018-01-31T13:26:12Z", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.servicecatalog.k8s.io"], "rc": 0, "start": "2019-01-09 15:59:02.980898", "stderr": "", "delta": "0:00:00.250518", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.servicecatalog.k8s.io", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.servicecatalog.k8s.io" ], "delta": "0:00:00.250518", "end": "2019-01-09 15:59:03.231416", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.servicecatalog.k8s.io", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:59:02.980898", "stderr": "", "stderr_lines": [], "stdout": "NAME CREATED AT\nv1beta1.servicecatalog.k8s.io 2018-01-31T13:26:12Z", "stdout_lines": [ "NAME CREATED AT", "v1beta1.servicecatalog.k8s.io 2018-01-31T13:26:12Z" ] } TASK [openshift_control_plane : Wait for /apis/servicecatalog.k8s.io/v1beta1 when registered] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:56 Wednesday 09 January 2019 15:59:03 +0100 (0:00:00.521) 0:19:37.569 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 15:59:03.700437", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"servicecatalog.k8s.io/v1beta1\\",\\"resources\\":[{\\"name\\":\\"clusterservicebrokers\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceBroker\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"clusterservicebrokers/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceBroker\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"clusterserviceclasses\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceClass\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"clusterserviceclasses/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceClass\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"clusterserviceplans\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServicePlan\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"clusterserviceplans/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServicePlan\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"servicebindings\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceBinding\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"servicebindings/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceBinding\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"serviceinstances\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceInstance\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"serviceinstances/reference\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceInstance\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"serviceinstances/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceInstance\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/servicecatalog.k8s.io/v1beta1"], "rc": 0, "start": "2019-01-09 15:59:03.510284", "stderr": "", "delta": "0:00:00.190153", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/servicecatalog.k8s.io/v1beta1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/servicecatalog.k8s.io/v1beta1" ], "delta": "0:00:00.190153", "end": "2019-01-09 15:59:03.700437", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/servicecatalog.k8s.io/v1beta1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:59:03.510284", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"servicecatalog.k8s.io/v1beta1\",\"resources\":[{\"name\":\"clusterservicebrokers\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterservicebrokers/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceclasses\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceclasses/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceplans\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceplans/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"servicebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"servicebindings/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"serviceinstances/reference\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"servicecatalog.k8s.io/v1beta1\",\"resources\":[{\"name\":\"clusterservicebrokers\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterservicebrokers/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceclasses\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceclasses/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceplans\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceplans/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"servicebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"servicebindings/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"serviceinstances/reference\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } TASK [Upgrade all storage] ************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:36 Wednesday 09 January 2019 15:59:03 +0100 (0:00:00.488) 0:19:38.057 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:00:21.476297", "stdout": "summary: total=10350 errors=0 ignored=0 unchanged=10339 migrated=11", "cmd": ["oc", "adm", "--config=/etc/origin/master/admin.kubeconfig", "migrate", "storage", "--include=*"], "rc": 0, "start": "2019-01-09 15:59:03.994724", "stderr": "", "delta": "0:01:17.481573", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, 
"_raw_params": "oc adm --config=/etc/origin/master/admin.kubeconfig migrate storage --include=*", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "adm", "--config=/etc/origin/master/admin.kubeconfig", "migrate", "storage", "--include=*" ], "delta": "0:01:17.481573", "end": "2019-01-09 16:00:21.476297", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc adm --config=/etc/origin/master/admin.kubeconfig migrate storage --include=*", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 15:59:03.994724", "stderr": "", "stderr_lines": [], "stdout": "summary: total=10350 errors=0 ignored=0 unchanged=10339 migrated=11", "stdout_lines": [ "summary: total=10350 errors=0 ignored=0 unchanged=10339 migrated=11" ] } TASK [Migrate legacy HPA scale target refs] ********************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:49 Wednesday 09 January 2019 16:00:21 +0100 (0:01:17.764) 0:20:55.821 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:00:22.151537", "stdout": "summary: total=0 errors=0 ignored=0 unchanged=0 migrated=0", "cmd": ["oc", "adm", "--config=/etc/origin/master/admin.kubeconfig", "migrate", "legacy-hpa", "--confirm"], "rc": 0, "start": "2019-01-09 16:00:21.920054", "stderr": "", "delta": "0:00:00.231483", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc adm --config=/etc/origin/master/admin.kubeconfig migrate legacy-hpa --confirm", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "oc", "adm", "--config=/etc/origin/master/admin.kubeconfig", "migrate", "legacy-hpa", "--confirm" ], "delta": "0:00:00.231483", "end": "2019-01-09 16:00:22.151537", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc adm --config=/etc/origin/master/admin.kubeconfig migrate legacy-hpa --confirm", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:00:21.920054", "stderr": "", "stderr_lines": [], "stdout": "summary: total=0 errors=0 ignored=0 unchanged=0 migrated=0", "stdout_lines": [ "summary: total=0 errors=0 ignored=0 unchanged=0 migrated=0" ] } META: ran handlers META: ran handlers PLAY [Set OpenShift master facts and image prepull] 
PLAY [Set OpenShift master facts and image prepull] ******************************************************************************************************************************************************* META: ran handlers TASK [openshift_master_facts : Verify required variables are set] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:2 Wednesday 09 January 2019 16:00:22 +0100 (0:00:00.684) 0:20:56.506 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_master_facts : Set g_metrics_hostname] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:14 Wednesday 09 January 2019 16:00:22 +0100 (0:00:00.104) 0:20:56.611 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "g_metrics_hostname": "hawkular-metrics.apps.os.ad.scanplus.de" }, "changed": false } TASK [openshift_master_facts : set_fact] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:20 Wednesday 09 January 2019 16:00:22 +0100 (0:00:00.432) 0:20:57.043 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_master_facts : Set master facts] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:24 Wednesday 09 January 2019 16:00:22 +0100 (0:00:00.109) 0:20:57.153 ***** Using module file /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "gather_subset": ["hardware", "network", "virtual", "facter"], "owner": null, "follow": false, "group": null, "gather_timeout": 10, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "role": "master", "selevel": null, "regexp": null, "src": null, "local_facts": {"cluster_public_hostname": "", 
"controller_args": "", "public_console_url": "", "api_port": "8443", "console_port": "", "openid_ca": "", "api_url": "", "audit_config": "", "logout_url": "", "api_use_ssl": "", "console_path": "", "registry_selector": "", "disabled_features": "", "api_server_args": "", "console_url": "", "bind_addr": "", "session_max_seconds": "", "logging_public_url": "", "cluster_hostname": "", "image_policy_allowed_registries_for_import": "", "public_api_url": "", "admission_plugin_config": {"openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"key": "images.openshift.io/deny-execution", "value": "true"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}, "github_ca": "", "ldap_ca": "", "image_policy_config": "", "session_name": "", "console_use_ssl": "", "kube_admission_plugin_config": "", "registry_url": ""}, "additive_facts_to_overwrite": [], "seuser": null, "filter": "*", "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "changed": false, "ansible_facts": {"openshift": {"node": {"dns_ip": "172.30.80.240", "proxy_mode": "iptables", "nodename": "sp-os-master01.os.ad.scanplus.de", "bootstrapped": true, "sdn_mtu": "1450"}, "builddefaults": {"config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}}}, "logging": {"elasticsearch": {"pvc": {}, "ops": {"pvc": {}}}}, "cloudprovider": {"kind": null}, "current_config": {"roles": ["node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides"]}, "master": {"public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "api_port": "8443", "console_port": "8443", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "console_path": "/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "console_use_ssl": true, "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "bind_addr": "0.0.0.0", "session_max_seconds": 3600, "cluster_method": "native", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "admission_plugin_config": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}, "BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}, "openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"key": "images.openshift.io/deny-execution", "value": "true"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}, "named_certificates": [{"certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": ["sp-os-master01.os.ad.scanplus.de"], "cafile": "/etc/origin/master/named_certificates/ca.crt"}], "manage_htpasswd": true, "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "portal_net": "172.30.0.0/16", "controllers_port": "8444", 
"session_name": "ssn"}, "common": {"is_etcd_system_container": false, "ip": "172.30.80.240", "dns_domain": "cluster.local", "is_master_system_container": false, "public_ip": "172.30.80.240", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "etcd_runtime": "host", "rolling_restart_mode": "services", "hostname": "sp-os-master01.os.ad.scanplus.de", "deployment_subtype": "basic", "is_node_system_container": false, "is_openvswitch_system_container": false, "system_images_registry": "registry.access.redhat.com", "generate_no_proxy_hosts": true, "kube_svc_ip": "172.18.128.1", "config_base": "/etc/origin", "all_hostnames": ["kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift"], "is_containerized": false, "no_proxy_etcd_host_ips": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "portal_net": "172.18.128.0/17", "deployment_type": "openshift-enterprise"}, "hosted": {"templates": {"kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig"}, "routers": [{"name": "router", "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "replicas": "{{ replicas | default(1) }}", "serviceaccount": "router", "namespace": "default", "stats_port": 1936, "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", "ports": ["80:80", "443:443"]}], "infra": {"selector": "region=infra"}, "registry": {"force": [false], "name": "docker-registry", "serviceaccount": "registry", "edits": [{"action": "put", "value": {"updatePeriodSeconds": 1, "timeoutSeconds": 600, "maxSurge": "25%", "intervalSeconds": 1, "maxUnavailable": "25%"}, "key": "spec.strategy.rollingParams"}], "selector": "region=infra", "cert": {"expire": {"days": 730}}, "env": {"vars": {}}, "volumes": [], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "router": {"certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "create_certificate": true, "image": "openshift3/ose-${component}:${version}", "selector": "region=infra", "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "registryurl": "openshift3/ose-${component}:${version}", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}"}, "docker": {"registry": {"insecure": {"default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}"}}}, "wfp": {"rc": {"phase": {"msg": "All items completed", "changed": true, "results": [{"_ansible_parsed": true, "stderr_lines": [], "rc": 0, "_ansible_item_result": true, "end": "2018-01-31 14:15:11.698797", "_ansible_no_log": false, "stdout": "Complete", "cmd": ["oc", "get", 
"replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }"], "attempts": 1, "item": [{"name": "router", "certificate": {"keyfile": "/etc/origin/master/openshift-router.key", "certfile": "/etc/origin/master/openshift-router.crt", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "stats_port": 1936, "edits": [{"action": "put", "value": 1, "key": "spec.strategy.rollingParams.intervalSeconds"}, {"action": "put", "value": 1, "key": "spec.strategy.rollingParams.updatePeriodSeconds"}, {"action": "put", "value": 21600, "key": "spec.strategy.activeDeadlineSeconds"}], "images": "openshift3/ose-${component}:${version}", "selector": "region=infra", "ports": ["80:80", "443:443"]}, {"_ansible_parsed": true, "stderr_lines": [], "_ansible_item_result": true, "end": "2018-01-31 14:15:11.096068", "_ansible_no_log": false, "stdout": "1", "cmd": ["oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "rc": 0, "item": {"name": "router", "certificate": {"certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key", "cafile": "/etc/origin/master/ca.crt"}, "replicas": "2", "namespace": "default", "serviceaccount": "router", "selector": "region=infra", "edits": [{"action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1}, {"action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600}], "images": "openshift3/ose-${component}:${version}", "stats_port": 1936, "ports": ["80:80", "443:443"]}, "delta": "0:00:00.196315", "stderr": "", "changed": true, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .status.latestVersion }\'", "removes": null, "creates": null, "chdir": null, "stdin": null}}, "stdout_lines": ["1"], "start": "2018-01-31 14:15:10.899753", "_ansible_ignore_errors": null, "failed": false}], "delta": "0:00:00.199963", "stderr": "", "changed": true, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath=\'{ .metadata.annotations.openshift\\\\.io/deployment\\\\.phase }\'", "removes": null, "warn": true, "chdir": null, "stdin": null}}, "stdout_lines": ["Complete"], "failed_when_result": false, "start": "2018-01-31 14:15:11.498834", "_ansible_ignore_errors": null, "failed": false}]}}}}, "docker": {"use_crio": false, "hosted_registry_network": "172.18.128.0/17", "use_system_container": false, "hosted_registry_insecure": false}, "buildoverrides": {"config": {"BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}}}}}}\n', "KeyError('ansible_os_family',)\n") ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift": { "builddefaults": { "config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } } } }, "buildoverrides": { "config": { "BuildOverrides": { "configuration": { 
"apiVersion": "v1", "kind": "BuildOverridesConfig" } } } }, "cloudprovider": { "kind": null }, "common": { "all_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "config_base": "/etc/origin", "deployment_subtype": "basic", "deployment_type": "openshift-enterprise", "dns_domain": "cluster.local", "etcd_runtime": "host", "generate_no_proxy_hosts": true, "hostname": "sp-os-master01.os.ad.scanplus.de", "internal_hostnames": [ "kubernetes.default", "172.30.80.240", "kubernetes.default.svc.cluster.local", "kubernetes", "openshift.default", "172.18.128.1", "sp-os-master01.os.ad.scanplus.de", "openshift.default.svc", "openshift.default.svc.cluster.local", "kubernetes.default.svc", "openshift" ], "ip": "172.30.80.240", "is_containerized": false, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.18.128.1", "no_proxy_etcd_host_ips": "172.30.80.240", "portal_net": "172.18.128.0/17", "public_hostname": "sp-os-master01.os.ad.scanplus.de", "public_ip": "172.30.80.240", "raw_hostname": "sp-os-master01.os.ad.scanplus.de", "rolling_restart_mode": "services", "system_images_registry": "registry.access.redhat.com" }, "current_config": { "roles": [ "node", "builddefaults", "logging", "cloudprovider", "master", "hosted", "docker", "buildoverrides" ] }, "docker": { "hosted_registry_insecure": false, "hosted_registry_network": "172.18.128.0/17", "use_crio": false, "use_system_container": false }, "hosted": { "docker": { "registry": { "insecure": { "default": "{{ openshift_docker_hosted_registry_insecure | default(False) }}" } } }, "infra": { "selector": "region=infra" }, "registry": { "cert": { "expire": { "days": 730 } }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams", "value": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 } } ], "env": { "vars": {} }, "force": [ false ], "name": "docker-registry", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "serviceaccount": "registry", "volumes": [], "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "router": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "create_certificate": true, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "image": "openshift3/ose-${component}:${version}", "registryurl": "openshift3/ose-${component}:${version}", "selector": "region=infra", "wait": "{{ not (openshift_master_bootstrap_enabled | default(False)) }}" }, "routers": [ { "certificate": "{{ openshift_hosted_router_certificate | default({}) }}", "edits": "{{ openshift_hosted_router_edits }}", "images": "{{ openshift_hosted_router_image | default(None) }}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "{{ replicas | default(1) }}", "selector": "{{ openshift_hosted_router_selector | default(None) }}", 
"serviceaccount": "router", "stats_port": 1936 } ], "templates": { "kubeconfig": "/tmp/openshift-ansible-DNTbe3/admin.kubeconfig" }, "wfp": { "rc": { "phase": { "changed": true, "msg": "All items completed", "results": [ { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "attempts": 1, "changed": true, "cmd": [ "oc", "get", "replicationcontroller", "router-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\.io/deployment\\.phase }" ], "delta": "0:00:00.199963", "end": "2018-01-31 14:15:11.698797", "failed": false, "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc get replicationcontroller router-1 --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .metadata.annotations.openshift\\.io/deployment\\.phase }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": [ { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, { "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": [ "oc", "get", "deploymentconfig", "router", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }" ], "delta": "0:00:00.196315", "end": "2018-01-31 14:15:11.096068", "failed": false, "invocation": { "module_args": { "_raw_params": "oc get deploymentconfig router --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .status.latestVersion }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": { "certificate": { "cafile": "/etc/origin/master/ca.crt", "certfile": "/etc/origin/master/openshift-router.crt", "keyfile": "/etc/origin/master/openshift-router.key" }, "edits": [ { "action": "put", "key": "spec.strategy.rollingParams.intervalSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.rollingParams.updatePeriodSeconds", "value": 1 }, { "action": "put", "key": "spec.strategy.activeDeadlineSeconds", "value": 21600 } ], "images": "openshift3/ose-${component}:${version}", "name": "router", "namespace": "default", "ports": [ "80:80", "443:443" ], "replicas": "2", "selector": "region=infra", "serviceaccount": "router", "stats_port": 1936 }, "rc": 0, "start": "2018-01-31 14:15:10.899753", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": [ "1" ] } ], "rc": 0, "start": "2018-01-31 14:15:11.498834", "stderr": "", "stderr_lines": [], "stdout": "Complete", "stdout_lines": [ "Complete" ] } ] } } } }, "logging": { "elasticsearch": { "ops": { "pvc": {} }, "pvc": {} } }, "master": { "admission_plugin_config": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": 
"BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } }, "api_port": "8443", "api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "api_use_ssl": true, "bind_addr": "0.0.0.0", "cluster_method": "native", "console_path": "/console", "console_port": "8443", "console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "console_use_ssl": true, "controllers_port": "8444", "ha": false, "loopback_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "loopback_cluster_name": "sp-os-master01-os-ad-scanplus-de:8443", "loopback_context_name": "default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "loopback_user": "system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", "manage_htpasswd": true, "named_certificates": [ { "cafile": "/etc/origin/master/named_certificates/ca.crt", "certfile": "/etc/origin/master/named_certificates/cert.crt", "keyfile": "/etc/origin/master/named_certificates/cert.key", "names": [ "sp-os-master01.os.ad.scanplus.de" ] } ], "portal_net": "172.30.0.0/16", "public_api_url": "https://sp-os-master01.os.ad.scanplus.de:8443", "public_console_url": "https://sp-os-master01.os.ad.scanplus.de:8443/console", "sdn_cluster_network_cidr": "172.18.0.0/17", "session_max_seconds": 3600, "session_name": "ssn" }, "node": { "bootstrapped": true, "dns_ip": "172.30.80.240", "nodename": "sp-os-master01.os.ad.scanplus.de", "proxy_mode": "iptables", "sdn_mtu": "1450" } } }, "changed": false, "invocation": { "module_args": { "additive_facts_to_overwrite": [], "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "filter": "*", "follow": false, "force": null, "gather_subset": [ "hardware", "network", "virtual", "facter" ], "gather_timeout": 10, "group": null, "local_facts": { "admission_plugin_config": { "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } }, "api_port": "8443", "api_server_args": "", "api_url": "", "api_use_ssl": "", "audit_config": "", "bind_addr": "", "cluster_hostname": "", "cluster_public_hostname": "", "console_path": "", "console_port": "", "console_url": "", "console_use_ssl": "", "controller_args": "", "disabled_features": "", "github_ca": "", "image_policy_allowed_registries_for_import": "", "image_policy_config": "", "kube_admission_plugin_config": "", "ldap_ca": "", "logging_public_url": "", "logout_url": "", "openid_ca": "", "public_api_url": "", "public_console_url": "", "registry_selector": "", "registry_url": "", "session_max_seconds": "", "session_name": "" }, "mode": null, "owner": null, "regexp": null, "remote_src": null, "role": "master", "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "unsafe_writes": null } } } TASK [openshift_master_facts : Determine if scheduler 
config present] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:58 Wednesday 09 January 2019 16:00:24 +0100 (0:00:01.119) 0:20:58.272 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/etc/origin/master/scheduler.json", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1536856232.619293, "block_size": 4096, "inode": 1308174, "isgid": false, "size": 1923, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/origin/master/scheduler.json", "xusr": false, "atime": 1547017416.908331, "isdir": false, "ctime": 1536856232.843296, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/origin/master/scheduler.json" } }, "stat": { "atime": 1547017416.908331, "block_size": 4096, "blocks": 8, "ctime": 1536856232.843296, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 1308174, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1536856232.619293, "nlink": 1, "path": "/etc/origin/master/scheduler.json", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1923, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [openshift_master_facts : Set Default scheduler predicates and priorities] ******************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:66 Wednesday 09 January 2019 16:00:24 +0100 (0:00:00.152) 0:20:58.590 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_scheduler_default_predicates": [ { "name": "NoVolumeZoneConflict" }, { "name": "MaxEBSVolumeCount" }, { "name": "MaxGCEPDVolumeCount" }, { "name": "MaxAzureDiskVolumeCount" }, { "name": "MatchInterPodAffinity" }, { "name": "NoDiskConflict" }, { "name": "GeneralPredicates" }, { "name": "PodToleratesNodeTaints" }, { "name": "CheckNodeMemoryPressure" }, { "name": "CheckNodeDiskPressure" }, { "name": "CheckVolumeBinding" }, { "argument": { "serviceAffinity": { "labels": [ "region" ] } }, "name": "Region" } ], "openshift_master_scheduler_default_priorities": [ { "name": "SelectorSpreadPriority", "weight": 1 }, { "name": "InterPodAffinityPriority", "weight": 1 }, { "name": "LeastRequestedPriority", "weight": 1 }, { "name": "BalancedResourceAllocation", "weight": 1 }, { "name": "NodePreferAvoidPodsPriority", "weight": 10000 }, { "name": "NodeAffinityPriority", "weight": 1 }, { "name": "TaintTolerationPriority", "weight": 1 }, { "argument": { "serviceAntiAffinity": { "label": "zone" } }, "name": "Zone", "weight": 2 } ] }, "changed": false }
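The facts above are the scheduler predicates and priorities the role expects for this release; the next task slurps /etc/origin/master/scheduler.json (the slurp module returns file contents base64-encoded) so the live policy can be compared against these defaults. To inspect the same file directly on the master (a sketch; python is known to be present there, since the play itself runs /usr/bin/python on the host):

    # pretty-print the scheduler policy file the next task retrieves
    ssh root@sp-os-master01.os.ad.scanplus.de 'python -m json.tool < /etc/origin/master/scheduler.json'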
"MatchInterPodAffinity" }, { "name": "NoDiskConflict" }, { "name": "GeneralPredicates" }, { "name": "PodToleratesNodeTaints" }, { "name": "CheckNodeMemoryPressure" }, { "name": "CheckNodeDiskPressure" }, { "name": "CheckVolumeBinding" }, { "argument": { "serviceAffinity": { "labels": [ "region" ] } }, "name": "Region" } ], "openshift_master_scheduler_default_priorities": [ { "name": "SelectorSpreadPriority", "weight": 1 }, { "name": "InterPodAffinityPriority", "weight": 1 }, { "name": "LeastRequestedPriority", "weight": 1 }, { "name": "BalancedResourceAllocation", "weight": 1 }, { "name": "NodePreferAvoidPodsPriority", "weight": 10000 }, { "name": "NodeAffinityPriority", "weight": 1 }, { "name": "TaintTolerationPriority", "weight": 1 }, { "argument": { "serviceAntiAffinity": { "label": "zone" } }, "name": "Zone", "weight": 2 } ] }, "changed": false } TASK [openshift_master_facts : Retrieve current scheduler config] *********************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:74 Wednesday 09 January 2019 16:00:24 +0100 (0:00:00.152) 0:20:58.743 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"content": 
"ewogICAgImFwaVZlcnNpb24iOiAidjEiLCAKICAgICJraW5kIjogIlBvbGljeSIsIAogICAgInByZWRpY2F0ZXMiOiBbCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJOb1ZvbHVtZVpvbmVDb25mbGljdCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEVCU1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF4R0NFUERWb2x1bWVDb3VudCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEF6dXJlRGlza1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF0Y2hJbnRlclBvZEFmZmluaXR5IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTm9EaXNrQ29uZmxpY3QiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJHZW5lcmFsUHJlZGljYXRlcyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlBvZFRvbGVyYXRlc05vZGVUYWludHMiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJDaGVja05vZGVNZW1vcnlQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrTm9kZURpc2tQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrVm9sdW1lQmluZGluZyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJhcmd1bWVudCI6IHsKICAgICAgICAgICAgICAgICJzZXJ2aWNlQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVscyI6IFsKICAgICAgICAgICAgICAgICAgICAgICAgInJlZ2lvbiIKICAgICAgICAgICAgICAgICAgICBdCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJSZWdpb24iCiAgICAgICAgfQogICAgXSwgCiAgICAicHJpb3JpdGllcyI6IFsKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlNlbGVjdG9yU3ByZWFkUHJpb3JpdHkiLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkludGVyUG9kQWZmaW5pdHlQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTGVhc3RSZXF1ZXN0ZWRQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiQmFsYW5jZWRSZXNvdXJjZUFsbG9jYXRpb24iLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVQcmVmZXJBdm9pZFBvZHNQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMTAwMDAKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVBZmZpbml0eVByaW9yaXR5IiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAxCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJUYWludFRvbGVyYXRpb25Qcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgImFyZ3VtZW50IjogewogICAgICAgICAgICAgICAgInNlcnZpY2VBbnRpQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVsIjogInpvbmUiCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJab25lIiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAyCiAgICAgICAgfQogICAgXQp9", "source": "/etc/origin/master/scheduler.json", "encoding": "base64", "invocation": {"module_args": {"src": "/etc/origin/master/scheduler.json"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "content": 
"ewogICAgImFwaVZlcnNpb24iOiAidjEiLCAKICAgICJraW5kIjogIlBvbGljeSIsIAogICAgInByZWRpY2F0ZXMiOiBbCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJOb1ZvbHVtZVpvbmVDb25mbGljdCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEVCU1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF4R0NFUERWb2x1bWVDb3VudCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEF6dXJlRGlza1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF0Y2hJbnRlclBvZEFmZmluaXR5IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTm9EaXNrQ29uZmxpY3QiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJHZW5lcmFsUHJlZGljYXRlcyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlBvZFRvbGVyYXRlc05vZGVUYWludHMiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJDaGVja05vZGVNZW1vcnlQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrTm9kZURpc2tQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrVm9sdW1lQmluZGluZyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJhcmd1bWVudCI6IHsKICAgICAgICAgICAgICAgICJzZXJ2aWNlQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVscyI6IFsKICAgICAgICAgICAgICAgICAgICAgICAgInJlZ2lvbiIKICAgICAgICAgICAgICAgICAgICBdCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJSZWdpb24iCiAgICAgICAgfQogICAgXSwgCiAgICAicHJpb3JpdGllcyI6IFsKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlNlbGVjdG9yU3ByZWFkUHJpb3JpdHkiLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkludGVyUG9kQWZmaW5pdHlQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTGVhc3RSZXF1ZXN0ZWRQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiQmFsYW5jZWRSZXNvdXJjZUFsbG9jYXRpb24iLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVQcmVmZXJBdm9pZFBvZHNQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMTAwMDAKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVBZmZpbml0eVByaW9yaXR5IiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAxCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJUYWludFRvbGVyYXRpb25Qcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgImFyZ3VtZW50IjogewogICAgICAgICAgICAgICAgInNlcnZpY2VBbnRpQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVsIjogInpvbmUiCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJab25lIiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAyCiAgICAgICAgfQogICAgXQp9", "encoding": "base64", "invocation": { "module_args": { "src": "/etc/origin/master/scheduler.json" } }, "source": "/etc/origin/master/scheduler.json" } TASK [openshift_master_facts : Set openshift_master_scheduler_current_config] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:79 Wednesday 09 January 2019 16:00:24 +0100 (0:00:00.316) 0:20:59.059 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_scheduler_current_config": { "apiVersion": "v1", "kind": "Policy", "predicates": [ { "name": "NoVolumeZoneConflict" }, { "name": "MaxEBSVolumeCount" }, { "name": "MaxGCEPDVolumeCount" }, { "name": "MaxAzureDiskVolumeCount" }, { "name": "MatchInterPodAffinity" }, { 
"name": "NoDiskConflict" }, { "name": "GeneralPredicates" }, { "name": "PodToleratesNodeTaints" }, { "name": "CheckNodeMemoryPressure" }, { "name": "CheckNodeDiskPressure" }, { "name": "CheckVolumeBinding" }, { "argument": { "serviceAffinity": { "labels": [ "region" ] } }, "name": "Region" } ], "priorities": [ { "name": "SelectorSpreadPriority", "weight": 1 }, { "name": "InterPodAffinityPriority", "weight": 1 }, { "name": "LeastRequestedPriority", "weight": 1 }, { "name": "BalancedResourceAllocation", "weight": 1 }, { "name": "NodePreferAvoidPodsPriority", "weight": 10000 }, { "name": "NodeAffinityPriority", "weight": 1 }, { "name": "TaintTolerationPriority", "weight": 1 }, { "argument": { "serviceAntiAffinity": { "label": "zone" } }, "name": "Zone", "weight": 2 } ] } }, "changed": false } TASK [openshift_master_facts : Test if scheduler config is readable] ******************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:83 Wednesday 09 January 2019 16:00:24 +0100 (0:00:00.157) 0:20:59.216 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_master_facts : Set current scheduler predicates and priorities] ********************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:88 Wednesday 09 January 2019 16:00:25 +0100 (0:00:00.125) 0:20:59.341 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "openshift_master_scheduler_current_predicates": [ { "name": "NoVolumeZoneConflict" }, { "name": "MaxEBSVolumeCount" }, { "name": "MaxGCEPDVolumeCount" }, { "name": "MaxAzureDiskVolumeCount" }, { "name": "MatchInterPodAffinity" }, { "name": "NoDiskConflict" }, { "name": "GeneralPredicates" }, { "name": "PodToleratesNodeTaints" }, { "name": "CheckNodeMemoryPressure" }, { "name": "CheckNodeDiskPressure" }, { "name": "CheckVolumeBinding" }, { "argument": { "serviceAffinity": { "labels": [ "region" ] } }, "name": "Region" } ], "openshift_master_scheduler_current_priorities": [ { "name": "SelectorSpreadPriority", "weight": 1 }, { "name": "InterPodAffinityPriority", "weight": 1 }, { "name": "LeastRequestedPriority", "weight": 1 }, { "name": "BalancedResourceAllocation", "weight": 1 }, { "name": "NodePreferAvoidPodsPriority", "weight": 10000 }, { "name": "NodeAffinityPriority", "weight": 1 }, { "name": "TaintTolerationPriority", "weight": 1 }, { "argument": { "serviceAntiAffinity": { "label": "zone" } }, "name": "Zone", "weight": 2 } ] }, "changed": false } TASK [openshift_control_plane : Check that origin image is present] ********************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/pre_pull.yml:2 Wednesday 09 January 2019 16:00:25 +0100 (0:00:00.158) 0:20:59.500 ***** Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:00:25.483471", "stdout": "96ee92cf05ea", "cmd": ["docker", "images", "-q", "registry.redhat.io/openshift3/ose-control-plane:v3.11"], "rc": 0, "start": "2019-01-09 16:00:25.444361", "stderr": "", "delta": "0:00:00.039110", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "docker images -q registry.redhat.io/openshift3/ose-control-plane:v3.11", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "docker", "images", "-q", "registry.redhat.io/openshift3/ose-control-plane:v3.11" ], "delta": "0:00:00.039110", "end": "2019-01-09 16:00:25.483471", "invocation": { "module_args": { "_raw_params": "docker images -q registry.redhat.io/openshift3/ose-control-plane:v3.11", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:00:25.444361", "stderr": "", "stderr_lines": [], "stdout": "96ee92cf05ea", "stdout_lines": [ "96ee92cf05ea" ] } TASK [openshift_control_plane : Pre-pull Origin image (docker)] ************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/pre_pull.yml:7 Wednesday 09 January 2019 16:00:25 +0100 (0:00:00.323) 0:20:59.823 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Check status of control plane image pre-pull] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/pre_pull_poll.yml:2 Wednesday 09 January 2019 16:00:25 +0100 (0:00:00.106) 0:20:59.929 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Check status of etcd image pre-pull] ******************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/pre_pull_poll.yml:12 Wednesday 09 January 2019 16:00:25 +0100 (0:00:00.113) 0:21:00.043 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [configure vsphere svc 
account] **************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [openshift_cloud_provider : Check to see if the vsphere cluster role already exists] *********************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_cloud_provider/tasks/vsphere-svc.yml:2 Wednesday 09 January 2019 16:00:25 +0100 (0:00:00.121) 0:21:00.164 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_cloud_provider : Create svc acccount file] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_cloud_provider/tasks/vsphere-svc.yml:7 Wednesday 09 January 2019 16:00:26 +0100 (0:00:00.117) 0:21:00.281 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_cloud_provider : Create vsphere-svc on cluster] ************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_cloud_provider/tasks/vsphere-svc.yml:14 Wednesday 09 January 2019 16:00:26 +0100 (0:00:00.111) 0:21:00.392 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_cloud_provider : Remove vsphere-svc file] ******************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_cloud_provider/tasks/vsphere-svc.yml:18 Wednesday 09 January 2019 16:00:26 +0100 (0:00:00.109) 0:21:00.501 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Upgrade master] ******************************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [debug] **************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:94 Wednesday 09 
January 2019 16:00:26 +0100 (0:00:00.116) 0:21:00.617 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [include_tasks] ******************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:97 Wednesday 09 January 2019 16:00:26 +0100 (0:00:00.108) 0:21:00.726 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Create credentials for oreg_url] ************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/registry_auth.yml:5 Wednesday 09 January 2019 16:00:26 +0100 (0:00:00.105) 0:21:00.832 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/docker_creds.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"username": "rhel_scanplus", "test_timeout": 20, "test_login": true, "tls_verify": true, "registry": "registry.redhat.io", "test_image": "openshift3/ose", "path": "/var/lib/origin/.docker", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "proxy_vars": " "}}, "changed": false, "rc": 0}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "path": "/var/lib/origin/.docker", "proxy_vars": " ", "registry": "registry.redhat.io", "test_image": "openshift3/ose", "test_login": true, "test_timeout": 20, "tls_verify": true, "username": "rhel_scanplus" } }, "rc": 0 } TASK [openshift_control_plane : Create credentials for any additional registries] ******************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/registry_auth.yml:22 Wednesday 09 January 2019 16:00:31 +0100 (0:00:04.945) 0:21:05.777 ***** TASK [openshift_control_plane : Copy static master scripts] ***************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static_shim.yml:3 Wednesday 09 January 2019 16:00:31 +0100 (0:00:00.110) 0:21:05.887 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o 
StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046031.72-264686600944196 `" && echo ansible-tmp-1547046031.72-264686600944196="` echo /root/.ansible/tmp/ansible-tmp-1547046031.72-264686600944196 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046031.72-264686600944196=/root/.ansible/tmp/ansible-tmp-1547046031.72-264686600944196\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/usr/local/bin/master-exec", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": true, "device_type": 0, "mtime": 1536865349.309609, "block_size": 4096, "inode": 1308224, "isgid": false, "size": 1100, "executable": true, "isuid": false, "readable": true, "version": "1173829568", "pw_name": "root", "gid": 0, "ischr": false, "wusr": false, "writeable": true, "mimetype": "text/x-shellscript", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": false, "gr_name": "root", "path": "/usr/local/bin/master-exec", "xusr": true, "atime": 1547019653.819505, "isdir": false, "ctime": 1536865349.5206122, "isblk": false, "wgrp": false, "checksum": "e078962e4e6a8f78db166cabd4e3997cefd1e848", "dev": 64769, "roth": false, "isfifo": false, "mode": "0500", "xgrp": false, "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:bin_t:s0", "mode": "0500", "path": "/usr/local/bin/master-exec", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": "master-exec", "path": 
"/usr/local/bin/master-exec", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "file", "content": null, "serole": null, "setype": null, "dest": "/usr/local/bin/", "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 320, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/usr/local/bin/master-exec"}, "before": {"path": "/usr/local/bin/master-exec"}}, "size": 1100}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046031.72-264686600944196/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=scripts/docker/master-exec) => { "changed": false, "checksum": "e078962e4e6a8f78db166cabd4e3997cefd1e848", "dest": "/usr/local/bin/master-exec", "diff": { "after": { "path": "/usr/local/bin/master-exec" }, "before": { "path": "/usr/local/bin/master-exec" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": "master-exec", "attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/usr/local/bin/", "directory_mode": null, "follow": true, "force": false, "group": null, "mode": 320, "owner": null, "path": "/usr/local/bin/master-exec", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "file", "unsafe_writes": null } }, "item": "scripts/docker/master-exec", "mode": "0500", "owner": "root", "path": "/usr/local/bin/master-exec", "secontext": "system_u:object_r:bin_t:s0", "size": 1100, "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046032.13-162129079451226 `" && echo ansible-tmp-1547046032.13-162129079451226="` echo /root/.ansible/tmp/ansible-tmp-1547046032.13-162129079451226 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046032.13-162129079451226=/root/.ansible/tmp/ansible-tmp-1547046032.13-162129079451226\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/usr/local/bin/master-logs", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": true, "device_type": 0, "mtime": 1536865349.8336174, "block_size": 4096, "inode": 1308225, "isgid": false, "size": 1112, "executable": true, "isuid": false, "readable": true, "version": "1173829582", "pw_name": "root", "gid": 0, "ischr": false, "wusr": false, "writeable": true, "mimetype": "text/x-shellscript", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": false, "gr_name": "root", "path": "/usr/local/bin/master-logs", "xusr": true, "atime": 1547019881.284889, "isdir": false, "ctime": 1536865350.0466208, "isblk": false, "wgrp": false, "checksum": "b70b0e70e837bd2e240ee5a6184ca5ae289fb8d9", "dev": 64769, "roth": false, "isfifo": false, "mode": "0500", "xgrp": false, "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:bin_t:s0", "mode": "0500", "path": "/usr/local/bin/master-logs", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": "master-logs", "path": "/usr/local/bin/master-logs", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "file", "content": null, "serole": null, "setype": null, "dest": "/usr/local/bin/", "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 320, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/usr/local/bin/master-logs"}, "before": {"path": "/usr/local/bin/master-logs"}}, "size": 1112}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046032.13-162129079451226/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=scripts/docker/master-logs) => { "changed": false, "checksum": "b70b0e70e837bd2e240ee5a6184ca5ae289fb8d9", "dest": "/usr/local/bin/master-logs", "diff": { "after": { "path": "/usr/local/bin/master-logs" }, "before": { "path": "/usr/local/bin/master-logs" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": "master-logs", 
"attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/usr/local/bin/", "directory_mode": null, "follow": true, "force": false, "group": null, "mode": 320, "owner": null, "path": "/usr/local/bin/master-logs", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "file", "unsafe_writes": null } }, "item": "scripts/docker/master-logs", "mode": "0500", "owner": "root", "path": "/usr/local/bin/master-logs", "secontext": "system_u:object_r:bin_t:s0", "size": 1112, "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046032.52-116041490176093 `" && echo ansible-tmp-1547046032.52-116041490176093="` echo /root/.ansible/tmp/ansible-tmp-1547046032.52-116041490176093 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046032.52-116041490176093=/root/.ansible/tmp/ansible-tmp-1547046032.52-116041490176093\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/usr/local/bin/master-restart", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": true, "device_type": 0, "mtime": 1536865350.425627, "block_size": 4096, "inode": 1308226, "isgid": false, "size": 1094, "executable": true, "isuid": false, "readable": true, "version": "1173829596", "pw_name": "root", "gid": 0, "ischr": false, "wusr": false, "writeable": true, "mimetype": "text/x-shellscript", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": false, "gr_name": "root", "path": "/usr/local/bin/master-restart", "xusr": true, "atime": 1547019881.6628964, "isdir": false, "ctime": 1536865350.6466305, "isblk": false, "wgrp": false, "checksum": "efc4960858199e78fed6bc8c2779d4d2d3bb4f11", "dev": 64769, "roth": false, "isfifo": false, "mode": "0500", "xgrp": false, "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o 
ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:bin_t:s0", "mode": "0500", "path": "/usr/local/bin/master-restart", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": "master-restart", "path": "/usr/local/bin/master-restart", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "file", "content": null, "serole": null, "setype": null, "dest": "/usr/local/bin/", "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": 320, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/usr/local/bin/master-restart"}, "before": {"path": "/usr/local/bin/master-restart"}}, "size": 1094}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046032.52-116041490176093/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=scripts/docker/master-restart) => { "changed": false, "checksum": "efc4960858199e78fed6bc8c2779d4d2d3bb4f11", "dest": "/usr/local/bin/master-restart", "diff": { "after": { "path": "/usr/local/bin/master-restart" }, "before": { "path": "/usr/local/bin/master-restart" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": "master-restart", "attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/usr/local/bin/", "directory_mode": null, "follow": true, "force": false, "group": null, "mode": 320, "owner": null, "path": "/usr/local/bin/master-restart", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "file", "unsafe_writes": null } }, "item": "scripts/docker/master-restart", "mode": "0500", "owner": "root", "path": "/usr/local/bin/master-restart", "secontext": "system_u:object_r:bin_t:s0", "size": 1094, "state": "file", "uid": 0 } TASK [openshift_control_plane : Ensure cri-tools installed] ***************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static_shim.yml:15 Wednesday 09 January 2019 16:00:33 +0100 (0:00:01.348) 0:21:07.236 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : openshift_master_scheduler_predicates is defined] 
******************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml:5 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.125) 0:21:07.362 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : openshift_master_scheduler_predicates is set to defaults from an earlier release] *********************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml:9 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.117) 0:21:07.479 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : openshift_master_scheduler_predicates does not match current defaults] ********************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml:14 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.114) 0:21:07.594 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : openshift_master_scheduler_predicates is not defined] *************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml:22 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.108) 0:21:07.702 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "openshift_master_scheduler_predicates is not defined" } TASK [openshift_control_plane : existing scheduler config does not match previous known defaults] *************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml:26 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.146) 0:21:07.849 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : set_fact openshift_upgrade_scheduler_predicates 1] ****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml:33 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.110) 0:21:07.959 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : set_fact openshift_upgrade_scheduler_predicates 2] 
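
Every branch of these predicate checks is skipped or purely informational because the current predicates already equal the 3.11 defaults (including the Region serviceAffinity entry generated from the configured region label), so neither set_fact fires and scheduler.json is left untouched. A rough shell equivalent of the "does not match current defaults" test, assuming jq and with $DEFAULT_PREDICATES_JSON standing in for the role's default list (a hypothetical variable, not something the playbook uses):

diff <(jq -S '.predicates' /etc/origin/master/scheduler.json) \
     <(printf '%s' "$DEFAULT_PREDICATES_JSON" | jq -S '.')
# An empty diff means the on-disk policy matches the defaults, so the
# upgrade has nothing to rewrite.
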
****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_predicates.yml:40 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.122) 0:21:08.081 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : openshift_master_scheduler_priorities is defined] ******************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml:5 Wednesday 09 January 2019 16:00:33 +0100 (0:00:00.133) 0:21:08.215 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : openshift_master_scheduler_priorities is set to defaults from an earlier release of OpenShift] ********************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml:9 Wednesday 09 January 2019 16:00:34 +0100 (0:00:00.129) 0:21:08.344 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : openshift_master_scheduler_priorities does not match current defaults] ********************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml:14 Wednesday 09 January 2019 16:00:34 +0100 (0:00:00.108) 0:21:08.452 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : openshift_master_scheduler_priorities is not defined] *************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml:22 Wednesday 09 January 2019 16:00:34 +0100 (0:00:00.116) 0:21:08.569 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "msg": "openshift_master_scheduler_priorities is not defined" } TASK [openshift_control_plane : existing scheduler config does not match previous known defaults] *************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml:26 Wednesday 09 January 2019 16:00:34 +0100 (0:00:00.149) 0:21:08.719 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : set_fact openshift_upgrade_scheduler_priorities 1] 
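
The priorities checks below mirror the predicate checks above and skip for the same reason. One detail worth decoding from the earlier "Copy static master scripts" results: the file module logged "mode": 320 in module_args but "mode": "0500" in the result. 320 is simply the decimal value of octal 0500, presumably because the mode was passed unquoted in YAML and parsed as a decimal integer:

printf '%o\n' 320   # prints 500, i.e. octal 0500: read+execute for owner only
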
****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml:33 Wednesday 09 January 2019 16:00:34 +0100 (0:00:00.133) 0:21:08.852 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : set_fact openshift_upgrade_scheduler_priorities 2] ****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_priorities.yml:40 Wednesday 09 January 2019 16:00:34 +0100 (0:00:00.121) 0:21:08.974 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Update scheduler config] ******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade/upgrade_scheduler.yml:9 Wednesday 09 January 2019 16:00:34 +0100 (0:00:00.148) 0:21:09.123 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : include_tasks] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:9 Wednesday 09 January 2019 16:00:35 +0100 (0:00:00.151) 0:21:09.274 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Test local loopback context] **************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:2 Wednesday 09 January 2019 16:00:35 +0100 (0:00:00.237) 0:21:09.512 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:00:35.610733", "stdout": "apiVersion: v1\\nclusters:\\n- cluster:\\n certificate-authority-data: 
REDACTED\\n server: https://sp-os-master01.os.ad.scanplus.de:8443\\n name: sp-os-master01-os-ad-scanplus-de:8443\\ncontexts:\\n- context:\\n cluster: sp-os-master01-os-ad-scanplus-de:8443\\n namespace: default\\n user: system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443\\n name: default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master\\ncurrent-context: default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master\\nkind: Config\\npreferences: {}\\nusers:\\n- name: system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443\\n user:\\n client-certificate-data: REDACTED\\n client-key-data: REDACTED", "cmd": ["oc", "config", "view", "--config=/etc/origin/master/openshift-master.kubeconfig"], "rc": 0, "start": "2019-01-09 16:00:35.456443", "stderr": "", "delta": "0:00:00.154290", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc config view --config=/etc/origin/master/openshift-master.kubeconfig", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": [ "oc", "config", "view", "--config=/etc/origin/master/openshift-master.kubeconfig" ], "delta": "0:00:00.154290", "end": "2019-01-09 16:00:35.610733", "invocation": { "module_args": { "_raw_params": "oc config view --config=/etc/origin/master/openshift-master.kubeconfig", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:00:35.456443", "stderr": "", "stderr_lines": [], "stdout": "apiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://sp-os-master01.os.ad.scanplus.de:8443\n name: sp-os-master01-os-ad-scanplus-de:8443\ncontexts:\n- context:\n cluster: sp-os-master01-os-ad-scanplus-de:8443\n namespace: default\n user: system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443\n name: default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master\ncurrent-context: default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master\nkind: Config\npreferences: {}\nusers:\n- name: system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED", "stdout_lines": [ "apiVersion: v1", "clusters:", "- cluster:", " certificate-authority-data: REDACTED", " server: https://sp-os-master01.os.ad.scanplus.de:8443", " name: sp-os-master01-os-ad-scanplus-de:8443", "contexts:", "- context:", " cluster: sp-os-master01-os-ad-scanplus-de:8443", " namespace: default", " user: system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", " name: default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "current-context: default/sp-os-master01-os-ad-scanplus-de:8443/system:openshift-master", "kind: Config", "preferences: {}", "users:", "- name: system:openshift-master/sp-os-master01-os-ad-scanplus-de:8443", " user:", " client-certificate-data: REDACTED", " client-key-data: REDACTED" ] } TASK [openshift_control_plane : command] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:9 Wednesday 09 January 
2019 16:00:35 +0100 (0:00:00.447) 0:21:09.960 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : command] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:19 Wednesday 09 January 2019 16:00:35 +0100 (0:00:00.125) 0:21:10.085 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : command] ************************************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:29 Wednesday 09 January 2019 16:00:35 +0100 (0:00:00.115) 0:21:10.201 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Check for ca-bundle.crt] ******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:14 Wednesday 09 January 2019 16:00:36 +0100 (0:00:00.107) 0:21:10.309 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/etc/origin/master/ca-bundle.crt", "get_md5": null, "get_mime": false, "get_attributes": false}}, "stat": {"uid": 0, "exists": true, "woth": false, "device_type": 0, "mtime": 1517401934.4273548, "block_size": 4096, "inode": 397660, "isgid": false, "size": 3055, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/origin/master/ca-bundle.crt", "xusr": false, "atime": 1547018014.8078556, "isdir": false, "ctime": 1517401934.4273548, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "rusr": true}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "failed_when_result": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": false, "get_checksum": false, "get_md5": null, 
"get_mime": false, "path": "/etc/origin/master/ca-bundle.crt" } }, "stat": { "atime": 1547018014.8078556, "block_size": 4096, "blocks": 8, "ctime": 1517401934.4273548, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 397660, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1517401934.4273548, "nlink": 1, "path": "/etc/origin/master/ca-bundle.crt", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 3055, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [openshift_control_plane : Check for ca.crt] *************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:23 Wednesday 09 January 2019 16:00:36 +0100 (0:00:00.295) 0:21:10.605 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": false, "follow": false, "path": "/etc/origin/master/ca.crt", "get_md5": null, "get_mime": false, "get_attributes": true}}, "stat": {"uid": 0, "exists": true, "attr_flags": "e", "woth": false, "device_type": 0, "mtime": 1517401934.316353, "block_size": 4096, "inode": 397147, "isgid": false, "size": 1070, "wgrp": false, "executable": false, "isuid": false, "readable": true, "isreg": true, "version": "18446744072088114083", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/origin/master/ca.crt", "xusr": false, "atime": 1547018001.2565944, "isdir": false, "ctime": 1517401934.316353, "isblk": false, "xgrp": false, "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "failed_when_result": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": false, "get_md5": null, "get_mime": false, "path": "/etc/origin/master/ca.crt" } }, "stat": { "atime": 1547018001.2565944, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 8, "ctime": 1517401934.316353, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 397147, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1517401934.316353, "nlink": 1, "path": "/etc/origin/master/ca.crt", "pw_name": "root", "readable": true, 
"rgrp": true, "roth": true, "rusr": true, "size": 1070, "uid": 0, "version": "18446744072088114083", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [openshift_control_plane : Migrate ca.crt to ca-bundle.crt] ************************************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:31 Wednesday 09 January 2019 16:00:36 +0100 (0:00:00.282) 0:21:10.887 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Link ca.crt to ca-bundle.crt] *************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:37 Wednesday 09 January 2019 16:00:36 +0100 (0:00:00.122) 0:21:11.010 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Update imagePolicyConfig.internalRegistryHostname] ****************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:47 Wednesday 09 January 2019 16:00:36 +0100 (0:00:00.120) 0:21:11.130 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "imagePolicyConfig.internalRegistryHostname", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": "docker-registry.default.svc:5000", "backup_ext": ".20190109T160037", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160037", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "imagePolicyConfig.internalRegistryHostname", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "present", "update": false, "value": "docker-registry.default.svc:5000", "value_type": "" } }, "result": [], "state": "present" } TASK [openshift_control_plane : Update oreg 
value] ************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:53 Wednesday 09 January 2019 16:00:37 +0100 (0:00:00.395) 0:21:11.526 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "imageConfig.format", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": "registry.redhat.io/openshift3/ose-${component}:${version}", "backup_ext": ".20190109T160037", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160037", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "imageConfig.format", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "present", "update": false, "value": "registry.redhat.io/openshift3/ose-${component}:${version}", "value_type": "" } }, "result": [], "state": "present" } TASK [openshift_control_plane : Change default node selector to compute=true] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:59 Wednesday 09 January 2019 16:00:37 +0100 (0:00:00.372) 0:21:11.899 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "projectConfig.defaultNodeSelector", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": "nodeusage=dev", "backup_ext": ".20190109T160038", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": false, "result": []}\n', '') ok: 
[sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160038", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "projectConfig.defaultNodeSelector", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "present", "update": false, "value": "nodeusage=dev", "value_type": "" } }, "result": [], "state": "present" } TASK [openshift_control_plane : Remove use of pod presets from master config] *********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:66 Wednesday 09 January 2019 16:00:38 +0100 (0:00:00.585) 0:21:12.485 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "admissionConfig.pluginConfig.PodPreset", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": null, "backup_ext": ".20190109T160038", "curr_value_format": "yaml", "edits": null, "state": "absent", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "absent", "changed": false, "result": {"masterClients": {"externalKubernetesKubeConfig": "", "externalKubernetesClientConnectionOverrides": {"qps": 200, "contentType": "application/vnd.kubernetes.protobuf", "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 400}, "openshiftLoopbackKubeConfig": "openshift-master.kubeconfig", "openshiftLoopbackClientConnectionOverrides": {"qps": 300, "contentType": "application/vnd.kubernetes.protobuf", "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 600}}, "policyConfig": {"bootstrapPolicyFile": "/etc/origin/master/policy.json", "openshiftInfrastructureNamespace": "openshift-infra", "openshiftSharedResourcesNamespace": "openshift"}, "imagePolicyConfig": {"MaxScheduledImageImportsPerMinute": 10, "ScheduledImageImportMinimumIntervalSeconds": 1800, "disableScheduledImport": false, "internalRegistryHostname": "docker-registry.default.svc:5000", "maxImagesBulkImportedPerRepository": 3}, "etcdStorageConfig": {"openShiftStorageVersion": "v1", "kubernetesStoragePrefix": "kubernetes.io", "kubernetesStorageVersion": "v1", "openShiftStoragePrefix": "openshift.io"}, "servingInfo": {"namedCertificates": [{"keyFile": "/etc/origin/master/named_certificates/cert.key", "certFile": "/etc/origin/master/named_certificates/cert.crt", "names": ["os.ad.scanplus.de"]}], "certFile": "master.server.crt", "bindAddress": "0.0.0.0:8443", "bindNetwork": "tcp4", "maxRequestsInFlight": 500, "keyFile": "master.server.key", "clientCA": "ca.crt", "requestTimeoutSeconds": 3600}, 
"oauthConfig": {"grantConfig": {"method": "auto"}, "tokenConfig": {"accessTokenMaxAgeSeconds": 86400, "authorizeTokenMaxAgeSeconds": 500}, "masterPublicURL": "https://os.ad.scanplus.de:8443", "assetPublicURL": "https://os.ad.scanplus.de:8443/console/", "servingInfo": {"namedCertificates": [{"keyFile": "/etc/origin/master/named_certificates/cert.key", "certFile": "/etc/origin/master/named_certificates/cert.crt", "names": ["os.ad.scanplus.de"]}]}, "sessionConfig": {"sessionMaxAgeSeconds": 3600, "sessionName": "ssn", "sessionSecretsFile": "/etc/origin/master/session-secrets.yaml"}, "masterURL": "https://sp-os-master01.os.ad.scanplus.de:8443", "masterCA": "ca-bundle.crt", "identityProviders": [{"challenge": true, "provider": {"bindDN": "CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de", "kind": "LDAPPasswordIdentityProvider", "bindPassword": "3UAL.dMJI4!b", "url": "ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)", "insecure": true, "apiVersion": "v1", "attributes": {"preferredUsername": ["sAMAccountName"], "id": ["sAMAccountName"], "name": ["cn"], "email": ["mail"]}}, "login": true, "name": "RH_IPA_LDAP_Auth"}]}, "networkConfig": {"externalIPNetworkCIDRs": ["0.0.0.0/0"], "clusterNetworks": [{"hostSubnetLength": 9, "cidr": "172.18.0.0/17"}], "serviceNetworkCIDR": "172.18.128.0/17", "networkPluginName": "redhat/openshift-ovs-multitenant"}, "kubeletClientInfo": {"keyFile": "master.kubelet-client.key", "ca": "ca-bundle.crt", "certFile": "master.kubelet-client.crt", "port": 10250}, "authConfig": {"requestHeader": {"usernameHeaders": ["X-Remote-User"], "extraHeaderPrefixes": ["X-Remote-Extra-"], "clientCA": "front-proxy-ca.crt", "groupHeaders": ["X-Remote-Group"], "clientCommonNames": ["aggregator-front-proxy"]}}, "serviceAccountConfig": {"masterCA": "ca-bundle.crt", "privateKeyFile": "serviceaccounts.private.key", "publicKeyFiles": ["serviceaccounts.public.key"], "managedNames": ["default", "builder", "deployer"], "limitSecretReferences": false}, "etcdClientInfo": {"keyFile": "master.etcd-client.key", "ca": "master.etcd-ca.crt", "certFile": "master.etcd-client.crt", "urls": ["https://sp-os-master01.os.ad.scanplus.de:2379"]}, "routingConfig": {"subdomain": "apps.os.ad.scanplus.de"}, "imageConfig": {"latest": false, "format": "registry.redhat.io/openshift3/ose-${component}:${version}"}, "projectConfig": {"projectRequestTemplate": "", "securityAllocator": {"uidAllocatorRange": "1000000000-1999999999/10000", "mcsAllocatorRange": "s0:/2", "mcsLabelsPerProject": 5}, "projectRequestMessage": "", "defaultNodeSelector": "nodeusage=dev"}, "admissionConfig": {"pluginConfig": {"BuildDefaults": {"configuration": {"kind": "BuildDefaultsConfig", "resources": {"requests": {}, "limits": {}}, "env": [], "apiVersion": "v1"}}, "BuildOverrides": {"configuration": {"kind": "BuildOverridesConfig", "apiVersion": "v1"}}, "openshift.io/ImagePolicy": {"configuration": {"kind": "ImagePolicyConfig", "executionRules": [{"skipOnResolutionFailure": true, "matchImageAnnotations": [{"value": "true", "key": "images.openshift.io/deny-execution"}], "reject": true, "name": "execution-denied", "onResources": [{"resource": "pods"}, {"resource": "builds"}]}], "apiVersion": "v1"}}}}, "masterPublicURL": "https://os.ad.scanplus.de:8443", "volumeConfig": {"dynamicProvisioningEnabled": true}, "pauseControllers": false, "apiLevels": ["v1"], "corsAllowedOrigins": 
["(?i)//127\\\\.0\\\\.0\\\\.1(:|\\\\z)", "(?i)//localhost(:|\\\\z)", "(?i)//172\\\\.30\\\\.80\\\\.240(:|\\\\z)", "(?i)//kubernetes\\\\.default(:|\\\\z)", "(?i)//kubernetes\\\\.default\\\\.svc\\\\.cluster\\\\.local(:|\\\\z)", "(?i)//kubernetes(:|\\\\z)", "(?i)//openshift\\\\.default(:|\\\\z)", "(?i)//172\\\\.18\\\\.128\\\\.1(:|\\\\z)", "(?i)//sp\\\\-os\\\\-master01\\\\.os\\\\.ad\\\\.scanplus\\\\.de(:|\\\\z)", "(?i)//openshift\\\\.default\\\\.svc(:|\\\\z)", "(?i)//openshift\\\\.default\\\\.svc\\\\.cluster\\\\.local(:|\\\\z)", "(?i)//kubernetes\\\\.default\\\\.svc(:|\\\\z)", "(?i)//openshift(:|\\\\z)"], "aggregatorConfig": {"proxyClientInfo": {"keyFile": "aggregator-front-proxy.key", "certFile": "aggregator-front-proxy.crt"}}, "controllerConfig": {"serviceServingCert": {"signer": {"keyFile": "service-signer.key", "certFile": "service-signer.crt"}}, "election": {"lockName": "openshift-master-controllers"}}, "kind": "MasterConfig", "dnsConfig": {"bindAddress": "0.0.0.0:8053", "bindNetwork": "tcp4"}, "apiVersion": "v1", "controllers": "*", "kubernetesMasterConfig": {"podEvictionTimeout": null, "masterIP": "172.30.80.240", "servicesNodePortRange": "", "apiServerArguments": {"storage-backend": ["etcd3"], "runtime-config": [], "storage-media-type": ["application/vnd.kubernetes.protobuf"]}, "schedulerArguments": null, "staticNodeNames": [], "proxyClientInfo": {"keyFile": "master.proxy-client.key", "certFile": "master.proxy-client.crt"}, "controllerArguments": {"pv-recycler-pod-template-filepath-nfs": ["/etc/origin/master/recycler_pod.yaml"], "cluster-signing-key-file": ["/etc/origin/master/ca.key"], "pv-recycler-pod-template-filepath-hostpath": ["/etc/origin/master/recycler_pod.yaml"], "cluster-signing-cert-file": ["/etc/origin/master/ca.crt"]}, "schedulerConfigFile": "/etc/origin/master/scheduler.json", "servicesSubnet": "172.18.128.0/17", "masterCount": 1}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160038", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "admissionConfig.pluginConfig.PodPreset", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "absent", "update": false, "value": null, "value_type": "" } }, "result": { "admissionConfig": { "pluginConfig": { "BuildDefaults": { "configuration": { "apiVersion": "v1", "env": [], "kind": "BuildDefaultsConfig", "resources": { "limits": {}, "requests": {} } } }, "BuildOverrides": { "configuration": { "apiVersion": "v1", "kind": "BuildOverridesConfig" } }, "openshift.io/ImagePolicy": { "configuration": { "apiVersion": "v1", "executionRules": [ { "matchImageAnnotations": [ { "key": "images.openshift.io/deny-execution", "value": "true" } ], "name": "execution-denied", "onResources": [ { "resource": "pods" }, { "resource": "builds" } ], "reject": true, "skipOnResolutionFailure": true } ], "kind": "ImagePolicyConfig" } } } }, "aggregatorConfig": { "proxyClientInfo": { "certFile": "aggregator-front-proxy.crt", "keyFile": "aggregator-front-proxy.key" } }, "apiLevels": [ "v1" ], "apiVersion": "v1", "authConfig": { "requestHeader": { "clientCA": "front-proxy-ca.crt", "clientCommonNames": [ "aggregator-front-proxy" ], "extraHeaderPrefixes": [ "X-Remote-Extra-" ], "groupHeaders": [ "X-Remote-Group" ], "usernameHeaders": [ "X-Remote-User" ] } }, "controllerConfig": { "election": { "lockName": "openshift-master-controllers" }, 
"serviceServingCert": { "signer": { "certFile": "service-signer.crt", "keyFile": "service-signer.key" } } }, "controllers": "*", "corsAllowedOrigins": [ "(?i)//127\\.0\\.0\\.1(:|\\z)", "(?i)//localhost(:|\\z)", "(?i)//172\\.30\\.80\\.240(:|\\z)", "(?i)//kubernetes\\.default(:|\\z)", "(?i)//kubernetes\\.default\\.svc\\.cluster\\.local(:|\\z)", "(?i)//kubernetes(:|\\z)", "(?i)//openshift\\.default(:|\\z)", "(?i)//172\\.18\\.128\\.1(:|\\z)", "(?i)//sp\\-os\\-master01\\.os\\.ad\\.scanplus\\.de(:|\\z)", "(?i)//openshift\\.default\\.svc(:|\\z)", "(?i)//openshift\\.default\\.svc\\.cluster\\.local(:|\\z)", "(?i)//kubernetes\\.default\\.svc(:|\\z)", "(?i)//openshift(:|\\z)" ], "dnsConfig": { "bindAddress": "0.0.0.0:8053", "bindNetwork": "tcp4" }, "etcdClientInfo": { "ca": "master.etcd-ca.crt", "certFile": "master.etcd-client.crt", "keyFile": "master.etcd-client.key", "urls": [ "https://sp-os-master01.os.ad.scanplus.de:2379" ] }, "etcdStorageConfig": { "kubernetesStoragePrefix": "kubernetes.io", "kubernetesStorageVersion": "v1", "openShiftStoragePrefix": "openshift.io", "openShiftStorageVersion": "v1" }, "imageConfig": { "format": "registry.redhat.io/openshift3/ose-${component}:${version}", "latest": false }, "imagePolicyConfig": { "MaxScheduledImageImportsPerMinute": 10, "ScheduledImageImportMinimumIntervalSeconds": 1800, "disableScheduledImport": false, "internalRegistryHostname": "docker-registry.default.svc:5000", "maxImagesBulkImportedPerRepository": 3 }, "kind": "MasterConfig", "kubeletClientInfo": { "ca": "ca-bundle.crt", "certFile": "master.kubelet-client.crt", "keyFile": "master.kubelet-client.key", "port": 10250 }, "kubernetesMasterConfig": { "apiServerArguments": { "runtime-config": [], "storage-backend": [ "etcd3" ], "storage-media-type": [ "application/vnd.kubernetes.protobuf" ] }, "controllerArguments": { "cluster-signing-cert-file": [ "/etc/origin/master/ca.crt" ], "cluster-signing-key-file": [ "/etc/origin/master/ca.key" ], "pv-recycler-pod-template-filepath-hostpath": [ "/etc/origin/master/recycler_pod.yaml" ], "pv-recycler-pod-template-filepath-nfs": [ "/etc/origin/master/recycler_pod.yaml" ] }, "masterCount": 1, "masterIP": "172.30.80.240", "podEvictionTimeout": null, "proxyClientInfo": { "certFile": "master.proxy-client.crt", "keyFile": "master.proxy-client.key" }, "schedulerArguments": null, "schedulerConfigFile": "/etc/origin/master/scheduler.json", "servicesNodePortRange": "", "servicesSubnet": "172.18.128.0/17", "staticNodeNames": [] }, "masterClients": { "externalKubernetesClientConnectionOverrides": { "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 400, "contentType": "application/vnd.kubernetes.protobuf", "qps": 200 }, "externalKubernetesKubeConfig": "", "openshiftLoopbackClientConnectionOverrides": { "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", "burst": 600, "contentType": "application/vnd.kubernetes.protobuf", "qps": 300 }, "openshiftLoopbackKubeConfig": "openshift-master.kubeconfig" }, "masterPublicURL": "https://os.ad.scanplus.de:8443", "networkConfig": { "clusterNetworks": [ { "cidr": "172.18.0.0/17", "hostSubnetLength": 9 } ], "externalIPNetworkCIDRs": [ "0.0.0.0/0" ], "networkPluginName": "redhat/openshift-ovs-multitenant", "serviceNetworkCIDR": "172.18.128.0/17" }, "oauthConfig": { "assetPublicURL": "https://os.ad.scanplus.de:8443/console/", "grantConfig": { "method": "auto" }, "identityProviders": [ { "challenge": true, "login": true, "name": "RH_IPA_LDAP_Auth", "provider": { "apiVersion": 
"v1", "attributes": { "email": [ "mail" ], "id": [ "sAMAccountName" ], "name": [ "cn" ], "preferredUsername": [ "sAMAccountName" ] }, "bindDN": "CN=osLdapReader,OU=Openshift,OU=ServiceUsers,OU=ScanPlus,DC=ad,DC=scanplus,DC=de", "bindPassword": "3UAL.dMJI4!b", "insecure": true, "kind": "LDAPPasswordIdentityProvider", "url": "ldap://SP-DC01.ad.scanplus.de/OU=ScanPlus,DC=ad,DC=scanplus,DC=de?sAMAccountName?sub?(memberOf=cn=OpenshiftUsers,OU=Openshift,OU=Groups,OU=ScanPlus,DC=ad,DC=scanplus,DC=de)" } } ], "masterCA": "ca-bundle.crt", "masterPublicURL": "https://os.ad.scanplus.de:8443", "masterURL": "https://sp-os-master01.os.ad.scanplus.de:8443", "servingInfo": { "namedCertificates": [ { "certFile": "/etc/origin/master/named_certificates/cert.crt", "keyFile": "/etc/origin/master/named_certificates/cert.key", "names": [ "os.ad.scanplus.de" ] } ] }, "sessionConfig": { "sessionMaxAgeSeconds": 3600, "sessionName": "ssn", "sessionSecretsFile": "/etc/origin/master/session-secrets.yaml" }, "tokenConfig": { "accessTokenMaxAgeSeconds": 86400, "authorizeTokenMaxAgeSeconds": 500 } }, "pauseControllers": false, "policyConfig": { "bootstrapPolicyFile": "/etc/origin/master/policy.json", "openshiftInfrastructureNamespace": "openshift-infra", "openshiftSharedResourcesNamespace": "openshift" }, "projectConfig": { "defaultNodeSelector": "nodeusage=dev", "projectRequestMessage": "", "projectRequestTemplate": "", "securityAllocator": { "mcsAllocatorRange": "s0:/2", "mcsLabelsPerProject": 5, "uidAllocatorRange": "1000000000-1999999999/10000" } }, "routingConfig": { "subdomain": "apps.os.ad.scanplus.de" }, "serviceAccountConfig": { "limitSecretReferences": false, "managedNames": [ "default", "builder", "deployer" ], "masterCA": "ca-bundle.crt", "privateKeyFile": "serviceaccounts.private.key", "publicKeyFiles": [ "serviceaccounts.public.key" ] }, "servingInfo": { "bindAddress": "0.0.0.0:8443", "bindNetwork": "tcp4", "certFile": "master.server.crt", "clientCA": "ca.crt", "keyFile": "master.server.key", "maxRequestsInFlight": 500, "namedCertificates": [ { "certFile": "/etc/origin/master/named_certificates/cert.crt", "keyFile": "/etc/origin/master/named_certificates/cert.key", "names": [ "os.ad.scanplus.de" ] } ], "requestTimeoutSeconds": 3600 }, "volumeConfig": { "dynamicProvisioningEnabled": true } }, "state": "absent" } TASK [openshift_control_plane : Find current value for runtime-config] ****************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:71 Wednesday 09 January 2019 16:00:38 +0100 (0:00:00.372) 0:21:12.857 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "kubernetesMasterConfig.apiServerArguments.runtime-config", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": null, 
"backup_ext": ".20190109T160038", "curr_value_format": "yaml", "edits": null, "state": "list", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "list", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160038", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "kubernetesMasterConfig.apiServerArguments.runtime-config", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "list", "update": false, "value": null, "value_type": "" } }, "result": [], "state": "list" } TASK [openshift_control_plane : Set the runtime-config to exclude pod presets] ********************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:77 Wednesday 09 January 2019 16:00:38 +0100 (0:00:00.365) 0:21:13.222 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "kubernetesMasterConfig.apiServerArguments.runtime-config", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": "[]", "backup_ext": ".20190109T160039", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160039", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "kubernetesMasterConfig.apiServerArguments.runtime-config", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "present", "update": false, "value": "[]", "value_type": "" } }, "result": [], "state": "present" } TASK [openshift_control_plane : Copy recyler pod to config directory] ******************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:84 Wednesday 09 January 2019 16:00:39 +0100 (0:00:00.386) 0:21:13.608 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046039.42-168192460008500 `" && echo ansible-tmp-1547046039.42-168192460008500="` echo /root/.ansible/tmp/ansible-tmp-1547046039.42-168192460008500 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046039.42-168192460008500=/root/.ansible/tmp/ansible-tmp-1547046039.42-168192460008500\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/etc/origin/master/recycler_pod.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": true, "device_type": 0, "mtime": 1547019888.5640295, "block_size": 4096, "inode": 1177905, "isgid": false, "size": 557, "executable": false, "isuid": false, "readable": true, "version": "18446744073061000708", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/plain", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/etc/origin/master/recycler_pod.yaml", "xusr": false, "atime": 1547019929.58782, "isdir": false, "ctime": 1547019888.7250326, "isblk": false, "wgrp": false, "checksum": "e294163edfcba7dab4fef3439ccca83e89dbb67d", "dev": 64769, "roth": true, "isfifo": false, "mode": "0644", "xgrp": false, "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"group": "root", "uid": 0, "changed": false, "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:etc_t:s0", "mode": "0644", "path": "/etc/origin/master/recycler_pod.yaml", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": "recycler_pod.yaml.j2", "path": "/etc/origin/master/recycler_pod.yaml", "owner": 
null, "follow": false, "group": null, "unsafe_writes": null, "state": "file", "content": null, "serole": null, "setype": null, "dest": "/etc/origin/master/recycler_pod.yaml", "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/origin/master/recycler_pod.yaml"}, "before": {"path": "/etc/origin/master/recycler_pod.yaml"}}, "size": 557}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046039.42-168192460008500/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "checksum": "e294163edfcba7dab4fef3439ccca83e89dbb67d", "dest": "/etc/origin/master/recycler_pod.yaml", "diff": { "after": { "path": "/etc/origin/master/recycler_pod.yaml" }, "before": { "path": "/etc/origin/master/recycler_pod.yaml" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": "recycler_pod.yaml.j2", "attributes": null, "backup": null, "content": null, "delimiter": null, "dest": "/etc/origin/master/recycler_pod.yaml", "directory_mode": null, "follow": false, "force": false, "group": null, "mode": null, "owner": null, "path": "/etc/origin/master/recycler_pod.yaml", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "file", "unsafe_writes": null } }, "mode": "0644", "owner": "root", "path": "/etc/origin/master/recycler_pod.yaml", "secontext": "system_u:object_r:etc_t:s0", "size": 557, "state": "file", "uid": 0 } TASK [openshift_control_plane : Update controller-manager to have nfs recycler pod] ***************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:88 Wednesday 09 January 2019 16:00:39 +0100 (0:00:00.560) 0:21:14.169 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "kubernetesMasterConfig.controllerArguments.pv-recycler-pod-template-filepath-nfs", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": "[\'/etc/origin/master/recycler_pod.yaml\']", "backup_ext": ".20190109T160040", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, 
"state": "present", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160040", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "kubernetesMasterConfig.controllerArguments.pv-recycler-pod-template-filepath-nfs", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "present", "update": false, "value": "['/etc/origin/master/recycler_pod.yaml']", "value_type": "" } }, "result": [], "state": "present" } TASK [openshift_control_plane : Update controller-manager to have hostpath recycler pod] ************************************************************************************************************************************************************************************************************************************ task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/upgrade.yml:94 Wednesday 09 January 2019 16:00:40 +0100 (0:00:00.395) 0:21:14.565 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "kubernetesMasterConfig.controllerArguments.pv-recycler-pod-template-filepath-hostpath", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": "[\'/etc/origin/master/recycler_pod.yaml\']", "backup_ext": ".20190109T160040", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160040", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "kubernetesMasterConfig.controllerArguments.pv-recycler-pod-template-filepath-hostpath", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "present", "update": false, "value": "['/etc/origin/master/recycler_pod.yaml']", "value_type": "" } }, "result": [], "state": "present" } TASK [openshift_cloud_provider : modify controller args] ******************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_cloud_provider/tasks/update-vsphere.yml:2 Wednesday 09 January 2019 16:00:40 +0100 (0:00:00.396) 0:21:14.962 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [debug] 
**************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:113 Wednesday 09 January 2019 16:00:40 +0100 (0:00:00.099) 0:21:15.061 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [include_tasks] ******************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:116 Wednesday 09 January 2019 16:00:41 +0100 (0:00:00.229) 0:21:15.291 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Enable bootstrapping in the master config] ************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:2 Wednesday 09 January 2019 16:00:41 +0100 (0:00:00.124) 0:21:15.415 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "", "src": "/etc/origin/master/master-config.yaml", "backup": false, "update": false, "value": null, "backup_ext": ".20190109T160041", "curr_value_format": "yaml", "edits": [{"value": ["/etc/origin/master/ca.crt"], "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-cert-file"}, {"value": ["/etc/origin/master/ca.key"], "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-key-file"}], "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160041", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": [ { "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-cert-file", "value": [ "/etc/origin/master/ca.crt" ] }, { "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-key-file", "value": [ "/etc/origin/master/ca.key" ] } ], "index": null, "key": "", "separator": ".", "src": "/etc/origin/master/master-config.yaml", "state": "present", "update": false, "value": null, "value_type": "" } }, "result": [], "state": "present" } 
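The string of yedit calls above all follow the same pattern: the lib_utils yedit module loads /etc/origin/master/master-config.yaml, resolves a dotted key path, and only rewrites the file when the target value differs, which is why each of these tasks on this already-converged master reports "changed": false and writes no backup file (backup: false). Decoded from its module_args, the "Enable bootstrapping in the master config" task directly above corresponds roughly to the following sketch (a reconstruction from the logged invocation, not a quote of the upstream role):

    - name: Enable bootstrapping in the master config
      yedit:
        src: /etc/origin/master/master-config.yaml
        separator: '.'
        state: present
        edits:
        # Values below are taken verbatim from the logged module_args.
        - key: kubernetesMasterConfig.controllerArguments.cluster-signing-cert-file
          value:
          - /etc/origin/master/ca.crt
        - key: kubernetesMasterConfig.controllerArguments.cluster-signing-key-file
          value:
          - /etc/origin/master/ca.key

The single-key invocations earlier in the log (imagePolicyConfig.internalRegistryHostname, imageConfig.format, projectConfig.defaultNodeSelector, the runtime-config list/present pair) pass key/value instead of an edits list, but are otherwise identical in shape.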
TASK [openshift_control_plane : Create temp directory for static pods] ****************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:15 Wednesday 09 January 2019 16:00:41 +0100 (0:00:00.393) 0:21:15.809 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:00:41.745485", "stdout": "/tmp/openshift-ansible-v0V2qh", "cmd": ["mktemp", "-d", "/tmp/openshift-ansible-XXXXXX"], "rc": 0, "start": "2019-01-09 16:00:41.742630", "stderr": "", "delta": "0:00:00.002855", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-ansible-XXXXXX" ], "delta": "0:00:00.002855", "end": "2019-01-09 16:00:41.745485", "invocation": { "module_args": { "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:00:41.742630", "stderr": "", "stderr_lines": [], "stdout": "/tmp/openshift-ansible-v0V2qh", "stdout_lines": [ "/tmp/openshift-ansible-v0V2qh" ] } TASK [openshift_control_plane : Prepare master static pods] ***************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:20 Wednesday 09 January 2019 16:00:41 +0100 (0:00:00.274) 0:21:16.083 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972 `" && echo ansible-tmp-1547046041.92-269913273335972="` echo 
/root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046041.92-269913273335972=/root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/openshift-ansible-v0V2qh", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "binary", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": false, "device_type": 0, "mtime": 1547046041.7449448, "block_size": 4096, "inode": 660254, "isgid": false, "size": 4096, "executable": true, "isuid": false, "readable": true, "version": "1807980370", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "inode/directory", "blocks": 8, "xoth": false, "islnk": false, "nlink": 2, "issock": false, "rgrp": false, "gr_name": "root", "path": "/tmp/openshift-ansible-v0V2qh", "xusr": true, "atime": 1547046041.7449448, "isdir": true, "ctime": 1547046041.7449448, "isblk": false, "wgrp": false, "xgrp": false, "dev": 64769, "roth": false, "isfifo": false, "mode": "0700", "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/files/apiserver.yaml TO /root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/files/apiserver.yaml /root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 
-o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/ /root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "checksum": "fd71fb37b51fceaad5af622a8d2623ddfbe5b3ac", "md5sum": "9580cc8d800807d7e5dfa9c8b5b329a4", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0600", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "apiserver.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/source", "checksum": "fd71fb37b51fceaad5af622a8d2623ddfbe5b3ac", "seuser": null, "delimiter": null, "mode": 384, "attributes": null, "backup": false}}, "size": 1649}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=apiserver.yaml) => { "changed": true, "checksum": "fd71fb37b51fceaad5af622a8d2623ddfbe5b3ac", "dest": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "apiserver.yaml", "attributes": null, "backup": false, "checksum": "fd71fb37b51fceaad5af622a8d2623ddfbe5b3ac", "content": null, "delimiter": null, "dest": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": 384, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/source", "unsafe_writes": null, "validate": null } }, "item": "apiserver.yaml", "md5sum": "9580cc8d800807d7e5dfa9c8b5b329a4", "mode": "0600", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 1649, "src": "/root/.ansible/tmp/ansible-tmp-1547046041.92-269913273335972/source", "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753 `" && echo ansible-tmp-1547046042.57-47653471651753="` echo /root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046042.57-47653471651753=/root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/openshift-ansible-v0V2qh", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "binary", "uid": 0, "exists": true, "attr_flags": "e", "woth": false, "isreg": false, "device_type": 0, "mtime": 1547046042.5159597, "block_size": 4096, "inode": 660254, "isgid": false, "size": 4096, "executable": true, "isuid": false, "readable": true, "version": "1807980370", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "inode/directory", "blocks": 8, "xoth": false, "islnk": false, "nlink": 2, "issock": false, "rgrp": false, "gr_name": "root", "path": "/tmp/openshift-ansible-v0V2qh", "xusr": true, "atime": 1547046041.7449448, "isdir": true, "ctime": 1547046042.5159597, "isblk": false, "wgrp": false, "xgrp": false, "dev": 64769, "roth": false, "isfifo": false, "mode": "0700", "rusr": true, "attributes": ["extents"]}, "changed": false}\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/files/controller.yaml TO 
/root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/files/controller.yaml /root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/ /root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "checksum": "9122a652bd3f4e88fdf75b9835b933f08d84f037", "md5sum": "20f866b46bbaf5d13b698a5625e9447a", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0600", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "controller.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/source", "checksum": "9122a652bd3f4e88fdf75b9835b933f08d84f037", "seuser": null, "delimiter": null, "mode": 384, "attributes": null, "backup": false}}, "size": 1847}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=controller.yaml) => { "changed": true, "checksum": "9122a652bd3f4e88fdf75b9835b933f08d84f037", "dest": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "controller.yaml", "attributes": null, "backup": false, "checksum": 
"9122a652bd3f4e88fdf75b9835b933f08d84f037", "content": null, "delimiter": null, "dest": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": 384, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/source", "unsafe_writes": null, "validate": null } }, "item": "controller.yaml", "md5sum": "20f866b46bbaf5d13b698a5625e9447a", "mode": "0600", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 1847, "src": "/root/.ansible/tmp/ansible-tmp-1547046042.57-47653471651753/source", "state": "file", "uid": 0 } TASK [openshift_control_plane : Update master static pods] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:29 Wednesday 09 January 2019 16:00:43 +0100 (0:00:01.375) 0:21:17.459 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "", "src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "backup": false, "update": false, "value": null, "backup_ext": ".20190109T160043", "curr_value_format": "yaml", "edits": [{"value": "registry.redhat.io/openshift3/ose-control-plane:v3.11", "key": "spec.containers[0].image"}], "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": true, "result": [{"edit": {"kind": "Pod", "spec": {"priorityClassName": "system-node-critical", "restartPolicy": "Always", "hostNetwork": true, "containers": [{"livenessProbe": {"initialDelaySeconds": 45, "httpGet": {"path": "healthz", "scheme": "HTTPS", "port": 8443}, "timeoutSeconds": 10}, "securityContext": {"privileged": true}, "name": "api", "image": "registry.redhat.io/openshift3/ose-control-plane:v3.11", "args": ["#!/bin/bash\\nset -euo pipefail\\nif [[ -f /etc/origin/master/master.env ]]; then\\n set -o allexport\\n source /etc/origin/master/master.env\\nfi\\nexec openshift start master api --config=/etc/origin/master/master-config.yaml --loglevel=${DEBUG_LOGLEVEL:-2}\\n"], "volumeMounts": [{"mountPath": "/etc/origin/master/", "name": "master-config"}, {"mountPath": "/etc/origin/cloudprovider/", "name": "master-cloud-provider"}, {"mountPath": "/var/lib/origin/", "name": "master-data"}, {"mountPath": "/etc/pki", "name": "master-pki"}], "command": ["/bin/bash", "-c"], "readinessProbe": {"initialDelaySeconds": 10, "httpGet": {"path": "healthz/ready", "scheme": "HTTPS", "port": 8443}, "timeoutSeconds": 10}}], "volumes": [{"hostPath": {"path": "/etc/origin/master/"}, "name": "master-config"}, {"hostPath": {"path": 
"/etc/origin/cloudprovider"}, "name": "master-cloud-provider"}, {"hostPath": {"path": "/var/lib/origin"}, "name": "master-data"}, {"hostPath": {"path": "/etc/pki"}, "name": "master-pki"}]}, "apiVersion": "v1", "metadata": {"labels": {"openshift.io/control-plane": "true", "openshift.io/component": "api"}, "namespace": "kube-system", "name": "master-api", "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}, "key": "spec.containers[0].image"}]}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=apiserver.yaml) => { "changed": true, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160043", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": [ { "key": "spec.containers[0].image", "value": "registry.redhat.io/openshift3/ose-control-plane:v3.11" } ], "index": null, "key": "", "separator": ".", "src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "state": "present", "update": false, "value": null, "value_type": "" } }, "item": "apiserver.yaml", "result": [ { "edit": { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "scheduler.alpha.kubernetes.io/critical-pod": "" }, "labels": { "openshift.io/component": "api", "openshift.io/control-plane": "true" }, "name": "master-api", "namespace": "kube-system" }, "spec": { "containers": [ { "args": [ "#!/bin/bash\nset -euo pipefail\nif [[ -f /etc/origin/master/master.env ]]; then\n set -o allexport\n source /etc/origin/master/master.env\nfi\nexec openshift start master api --config=/etc/origin/master/master-config.yaml --loglevel=${DEBUG_LOGLEVEL:-2}\n" ], "command": [ "/bin/bash", "-c" ], "image": "registry.redhat.io/openshift3/ose-control-plane:v3.11", "livenessProbe": { "httpGet": { "path": "healthz", "port": 8443, "scheme": "HTTPS" }, "initialDelaySeconds": 45, "timeoutSeconds": 10 }, "name": "api", "readinessProbe": { "httpGet": { "path": "healthz/ready", "port": 8443, "scheme": "HTTPS" }, "initialDelaySeconds": 10, "timeoutSeconds": 10 }, "securityContext": { "privileged": true }, "volumeMounts": [ { "mountPath": "/etc/origin/master/", "name": "master-config" }, { "mountPath": "/etc/origin/cloudprovider/", "name": "master-cloud-provider" }, { "mountPath": "/var/lib/origin/", "name": "master-data" }, { "mountPath": "/etc/pki", "name": "master-pki" } ] } ], "hostNetwork": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "volumes": [ { "hostPath": { "path": "/etc/origin/master/" }, "name": "master-config" }, { "hostPath": { "path": "/etc/origin/cloudprovider" }, "name": "master-cloud-provider" }, { "hostPath": { "path": "/var/lib/origin" }, "name": "master-data" }, { "hostPath": { "path": "/etc/pki" }, "name": "master-pki" } ] } }, "key": "spec.containers[0].image" } ], "state": "present" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "", "src": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "backup": false, "update": false, "value": null, "backup_ext": 
".20190109T160043", "curr_value_format": "yaml", "edits": [{"value": "registry.redhat.io/openshift3/ose-control-plane:v3.11", "key": "spec.containers[0].image"}], "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": true, "result": [{"edit": {"kind": "Pod", "spec": {"priorityClassName": "system-node-critical", "restartPolicy": "Always", "hostNetwork": true, "containers": [{"livenessProbe": {"httpGet": {"path": "healthz", "scheme": "HTTPS", "port": 8444}}, "securityContext": {"privileged": true}, "name": "controllers", "image": "registry.redhat.io/openshift3/ose-control-plane:v3.11", "args": ["#!/bin/bash\\nset -euo pipefail\\nif [[ -f /etc/origin/master/master.env ]]; then\\n set -o allexport\\n source /etc/origin/master/master.env\\nfi\\nexec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444 --loglevel=${DEBUG_LOGLEVEL:-2}\\n"], "volumeMounts": [{"mountPath": "/etc/origin/master/", "name": "master-config"}, {"mountPath": "/etc/origin/cloudprovider/", "name": "master-cloud-provider"}, {"mountPath": "/etc/containers/registries.d/", "name": "signature-import"}, {"mountPath": "/usr/libexec/kubernetes/kubelet-plugins", "mountPropagation": "HostToContainer", "name": "kubelet-plugins"}, {"mountPath": "/etc/pki", "name": "master-pki"}], "command": ["/bin/bash", "-c"]}], "volumes": [{"hostPath": {"path": "/etc/origin/master/"}, "name": "master-config"}, {"hostPath": {"path": "/etc/origin/cloudprovider"}, "name": "master-cloud-provider"}, {"hostPath": {"path": "/etc/containers/registries.d"}, "name": "signature-import"}, {"hostPath": {"path": "/usr/libexec/kubernetes/kubelet-plugins"}, "name": "kubelet-plugins"}, {"hostPath": {"path": "/etc/pki"}, "name": "master-pki"}]}, "apiVersion": "v1", "metadata": {"labels": {"openshift.io/control-plane": "true", "openshift.io/component": "controllers"}, "namespace": "kube-system", "name": "master-controllers", "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}, "key": "spec.containers[0].image"}]}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=controller.yaml) => { "changed": true, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160043", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": [ { "key": "spec.containers[0].image", "value": "registry.redhat.io/openshift3/ose-control-plane:v3.11" } ], "index": null, "key": "", "separator": ".", "src": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "state": "present", "update": false, "value": null, "value_type": "" } }, "item": "controller.yaml", "result": [ { "edit": { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "scheduler.alpha.kubernetes.io/critical-pod": "" }, "labels": { "openshift.io/component": "controllers", "openshift.io/control-plane": "true" }, "name": "master-controllers", "namespace": "kube-system" }, "spec": { "containers": [ { "args": [ "#!/bin/bash\nset -euo pipefail\nif [[ -f /etc/origin/master/master.env ]]; then\n set -o allexport\n source /etc/origin/master/master.env\nfi\nexec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444 --loglevel=${DEBUG_LOGLEVEL:-2}\n" ], "command": [ "/bin/bash", "-c" ], "image": "registry.redhat.io/openshift3/ose-control-plane:v3.11", "livenessProbe": { 
"httpGet": { "path": "healthz", "port": 8444, "scheme": "HTTPS" } }, "name": "controllers", "securityContext": { "privileged": true }, "volumeMounts": [ { "mountPath": "/etc/origin/master/", "name": "master-config" }, { "mountPath": "/etc/origin/cloudprovider/", "name": "master-cloud-provider" }, { "mountPath": "/etc/containers/registries.d/", "name": "signature-import" }, { "mountPath": "/usr/libexec/kubernetes/kubelet-plugins", "mountPropagation": "HostToContainer", "name": "kubelet-plugins" }, { "mountPath": "/etc/pki", "name": "master-pki" } ] } ], "hostNetwork": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "volumes": [ { "hostPath": { "path": "/etc/origin/master/" }, "name": "master-config" }, { "hostPath": { "path": "/etc/origin/cloudprovider" }, "name": "master-cloud-provider" }, { "hostPath": { "path": "/etc/containers/registries.d" }, "name": "signature-import" }, { "hostPath": { "path": "/usr/libexec/kubernetes/kubelet-plugins" }, "name": "kubelet-plugins" }, { "hostPath": { "path": "/etc/pki" }, "name": "master-pki" } ] } }, "key": "spec.containers[0].image" } ], "state": "present" } TASK [openshift_control_plane : Update master static pod (api)] ************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:39 Wednesday 09 January 2019 16:00:43 +0100 (0:00:00.549) 0:21:18.009 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "", "src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "backup": false, "update": false, "value": null, "backup_ext": ".20190109T160043", "curr_value_format": "yaml", "edits": [{"value": "8443", "key": "spec.containers[0].livenessProbe.httpGet.port"}, {"value": "8443", "key": "spec.containers[0].readinessProbe.httpGet.port"}], "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": false, "result": []}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160043", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": [ { "key": "spec.containers[0].livenessProbe.httpGet.port", "value": "8443" }, { "key": "spec.containers[0].readinessProbe.httpGet.port", "value": "8443" } ], "index": null, "key": "", "separator": ".", "src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "state": "present", "update": false, "value": null, "value_type": "" } }, "result": [], "state": "present" } TASK [openshift_control_plane : ensure kubelet plugins dir exists] 
TASK [openshift_control_plane : ensure kubelet plugins dir exists] **********************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:48
Wednesday 09 January 2019 16:00:44 +0100 (0:00:00.320) 0:21:18.330 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [openshift_control_plane : Update controller-manager static pod on atomic host] ****************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:55
Wednesday 09 January 2019 16:00:44 +0100 (0:00:00.108) 0:21:18.438 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [openshift_control_plane : ensure pod location exists] *****************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:65
Wednesday 09 January 2019 16:00:44 +0100 (0:00:00.113) 0:21:18.551 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"group": "root", "uid": 0, "changed": true, "owner": "root", "state": "directory", "gid": 0, "secontext": "unconfined_u:object_r:etc_t:s0", "mode": "0755", "path": "/etc/origin/node/pods/", "invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/etc/origin/node/pods/", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "directory", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": "0755", "attributes": null, "backup": null}}, "diff": {"after": {"path": "/etc/origin/node/pods/", "mode": "0755"}, "before": {"path": "/etc/origin/node/pods/", "mode": "0700"}}, "size": 4096}\n', '')
changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "diff": { "after": { "mode": "0755", "path": "/etc/origin/node/pods/" }, "before": { "mode": "0700", "path": "/etc/origin/node/pods/" } }, "gid": 0, "group": "root", "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": "0755", "owner": null, "path": "/etc/origin/node/pods/", "recurse": false, "regexp": null, "remote_src": null, "selevel": null,
"serole": null, "setype": null, "seuser": null, "src": null, "state": "directory", "unsafe_writes": null } }, "mode": "0755", "owner": "root", "path": "/etc/origin/node/pods/", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 4096, "state": "directory", "uid": 0 } TASK [openshift_control_plane : Update master static pods] ****************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:71 Wednesday 09 January 2019 16:00:44 +0100 (0:00:00.279) 0:21:18.831 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046044.67-218050157538601 `" && echo ansible-tmp-1547046044.67-218050157538601="` echo /root/.ansible/tmp/ansible-tmp-1547046044.67-218050157538601 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046044.67-218050157538601=/root/.ansible/tmp/ansible-tmp-1547046044.67-218050157538601\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "changed": false, "group": "root", "uid": 0, "dest": "/etc/origin/node/pods/apiserver.yaml", "checksum": "83d146b146147e281865733aed2ee899bc42cd45", "md5sum": "5ff58b23d7bc6219ad24ddd3a39d298d", "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:etc_t:s0", "mode": "0600", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": true, "_original_basename": null, "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/etc/origin/node/pods/apiserver.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "checksum": null, "seuser": null, "delimiter": null, "mode": 384, "attributes": null, "backup": false}}, "size": 1658}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=apiserver.yaml) => { "changed": false, "checksum": "83d146b146147e281865733aed2ee899bc42cd45", "dest": "/etc/origin/node/pods/apiserver.yaml", "gid": 0, 
"group": "root", "invocation": { "module_args": { "_original_basename": null, "attributes": null, "backup": false, "checksum": null, "content": null, "delimiter": null, "dest": "/etc/origin/node/pods/apiserver.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": 384, "owner": null, "regexp": null, "remote_src": true, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "unsafe_writes": null, "validate": null } }, "item": "apiserver.yaml", "md5sum": "5ff58b23d7bc6219ad24ddd3a39d298d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1658, "src": "/tmp/openshift-ansible-v0V2qh/apiserver.yaml", "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046044.89-132560508692614 `" && echo ansible-tmp-1547046044.89-132560508692614="` echo /root/.ansible/tmp/ansible-tmp-1547046044.89-132560508692614 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046044.89-132560508692614=/root/.ansible/tmp/ansible-tmp-1547046044.89-132560508692614\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "changed": false, "group": "root", "uid": 0, "dest": "/etc/origin/node/pods/controller.yaml", "checksum": "69744fb10825134cacb5d97c3fb8c65122c78e73", "md5sum": "147b12e0f403f3e13750157ab071079c", "owner": "root", "state": "file", "gid": 0, "secontext": "system_u:object_r:etc_t:s0", "mode": "0600", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": true, "_original_basename": null, "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/etc/origin/node/pods/controller.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "checksum": null, "seuser": null, "delimiter": null, "mode": 384, "attributes": null, "backup": false}}, "size": 1759}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => (item=controller.yaml) => { "changed": false, "checksum": "69744fb10825134cacb5d97c3fb8c65122c78e73", "dest": 
"/etc/origin/node/pods/controller.yaml", "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": null, "attributes": null, "backup": false, "checksum": null, "content": null, "delimiter": null, "dest": "/etc/origin/node/pods/controller.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": 384, "owner": null, "regexp": null, "remote_src": true, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "unsafe_writes": null, "validate": null } }, "item": "controller.yaml", "md5sum": "147b12e0f403f3e13750157ab071079c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1759, "src": "/tmp/openshift-ansible-v0V2qh/controller.yaml", "state": "file", "uid": 0 } TASK [openshift_control_plane : Remove temporary directory] ***************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:81 Wednesday 09 January 2019 16:00:45 +0100 (0:00:00.593) 0:21:19.424 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/tmp/openshift-ansible-v0V2qh", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "absent", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "name": "/tmp/openshift-ansible-v0V2qh", "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "path": "/tmp/openshift-ansible-v0V2qh", "state": "absent", "changed": true, "diff": {"after": {"path": "/tmp/openshift-ansible-v0V2qh", "state": "absent"}, "before": {"path": "/tmp/openshift-ansible-v0V2qh", "state": "directory"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "diff": { "after": { "path": "/tmp/openshift-ansible-v0V2qh", "state": "absent" }, "before": { "path": "/tmp/openshift-ansible-v0V2qh", "state": "directory" } }, "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "name": "/tmp/openshift-ansible-v0V2qh", "owner": null, "path": "/tmp/openshift-ansible-v0V2qh", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "absent", "unsafe_writes": null } }, "path": "/tmp/openshift-ansible-v0V2qh", "state": "absent" } TASK [Restart master system] 
TASK [Restart master system] ************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml:2
Wednesday 09 January 2019 16:00:45 +0100 (0:00:00.273) 0:21:19.697 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [Wait for master to restart] *******************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml:10
Wednesday 09 January 2019 16:00:45 +0100 (0:00:00.113) 0:21:19.811 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [Wait for master API to come back online] ******************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/tasks/restart_hosts.yml:17
Wednesday 09 January 2019 16:00:45 +0100 (0:00:00.107) 0:21:19.918 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [openshift_control_plane : restart master] *****************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/restart.yml:2
Wednesday 09 January 2019 16:00:45 +0100 (0:00:00.107) 0:21:20.025 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"changed": true, "end": "2019-01-09 16:00:46.763494", "stdout": "2", "cmd": ["/usr/local/bin/master-restart", "api"], "rc": 0, "start": "2019-01-09 16:00:45.966779", "stderr": "", "delta": "0:00:00.796715", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/usr/local/bin/master-restart \\"api\\"", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '')
changed: [sp-os-master01.os.ad.scanplus.de] => (item=api) => { "attempts": 1, "changed": true, "cmd": [ "/usr/local/bin/master-restart", "api" ], "delta": "0:00:00.796715", "end": "2019-01-09 16:00:46.763494", "invocation": { "module_args": { "_raw_params":
"/usr/local/bin/master-restart \"api\"", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "api", "rc": 0, "start": "2019-01-09 16:00:45.966779", "stderr": "", "stderr_lines": [], "stdout": "2", "stdout_lines": [ "2" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:00:47.786790", "stdout": "2", "cmd": ["/usr/local/bin/master-restart", "controllers"], "rc": 0, "start": "2019-01-09 16:00:46.926472", "stderr": "", "delta": "0:00:00.860318", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "/usr/local/bin/master-restart \\"controllers\\"", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=controllers) => { "attempts": 1, "changed": true, "cmd": [ "/usr/local/bin/master-restart", "controllers" ], "delta": "0:00:00.860318", "end": "2019-01-09 16:00:47.786790", "invocation": { "module_args": { "_raw_params": "/usr/local/bin/master-restart \"controllers\"", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "controllers", "rc": 0, "start": "2019-01-09 16:00:46.926472", "stderr": "", "stderr_lines": [], "stdout": "2", "stdout_lines": [ "2" ] } NOTIFIED HANDLER openshift_control_plane : verify API server for sp-os-master01.os.ad.scanplus.de TASK [debug] **************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:131 Wednesday 09 January 2019 16:00:47 +0100 (0:00:02.147) 0:21:22.173 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [include_tasks] ******************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:134 Wednesday 09 January 2019 16:00:48 +0100 (0:00:00.112) 0:21:22.286 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: 
TASK [debug] ****************************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:131
Wednesday 09 January 2019 16:00:47 +0100 (0:00:02.147) 0:21:22.173 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => {}
TASK [include_tasks] ********************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:134
Wednesday 09 January 2019 16:00:48 +0100 (0:00:00.112) 0:21:22.286 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [set_fact] *************************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:137
Wednesday 09 January 2019 16:00:48 +0100 (0:00:00.115) 0:21:22.401 *****
ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "master_update_complete": true }, "changed": false }
RUNNING HANDLER [openshift_control_plane : verify API server] ***************************************************************************************************************************************************************************************************************
Wednesday 09 January 2019 16:00:48 +0100 (0:00:00.260) 0:21:22.662 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(1, '\n{"changed": true, "end": "2019-01-09 16:00:50.842722", "stdout": "", "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready"], "failed": true, "delta": "0:00:02.007225", "stderr": "", "rc": 28, "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2019-01-09 16:00:48.835497", "msg": "non-zero return code"}\n', '')
FAILED - RETRYING: verify API server (120 retries left).
Result was: { "attempts": 1, "changed": false, "cmd": [ "curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready" ], "delta": "0:00:02.007225", "end": "2019-01-09 16:00:50.842722", "invocation": { "module_args": { "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "msg": "non-zero return code", "rc": 28, "retries": 121, "start": "2019-01-09 16:00:48.835497", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] }
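This handler gates the rest of the upgrade on the API answering again. rc 28 is curl's operation-timeout exit code, so the failed attempts only mean the freshly restarted apiserver is not yet serving; it comes back within a few retries (about 13 seconds in this run). Judging from the logged command and the "120 retries left" countdown, the handler behaves like the sketch below; the hostname would be a variable in the actual role, and the delay value is a guess:

    - name: verify API server (sketch)
      command: >
        curl --silent --tlsv1.2 --max-time 2
        --cacert /etc/origin/master/ca-bundle.crt
        https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready
      args:
        warn: no
      register: api_health
      until: api_health.stdout == 'ok'
      retries: 120
      delay: 1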
28, "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2019-01-09 16:00:52.008008", "msg": "non-zero return code"}\n', '') FAILED - RETRYING: verify API server (119 retries left).Result was: { "attempts": 2, "changed": false, "cmd": [ "curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready" ], "delta": "0:00:02.008292", "end": "2019-01-09 16:00:54.016300", "invocation": { "module_args": { "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "msg": "non-zero return code", "rc": 28, "retries": 121, "start": "2019-01-09 16:00:52.008008", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"changed": true, "end": "2019-01-09 16:00:57.159777", "stdout": "", "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready"], "failed": true, "delta": "0:00:02.007858", "stderr": "", "rc": 28, "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2019-01-09 16:00:55.151919", "msg": "non-zero return code"}\n', '') FAILED - RETRYING: verify API server (118 retries left).Result was: { "attempts": 3, "changed": false, "cmd": [ "curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready" ], "delta": "0:00:02.007858", "end": "2019-01-09 16:00:57.159777", "invocation": { "module_args": { "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "msg": "non-zero return code", "rc": 28, "retries": 121, "start": "2019-01-09 16:00:55.151919", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no 
-o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"changed": true, "end": "2019-01-09 16:01:00.307064", "stdout": "", "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready"], "failed": true, "delta": "0:00:02.009231", "stderr": "", "rc": 28, "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2019-01-09 16:00:58.297833", "msg": "non-zero return code"}\n', '') FAILED - RETRYING: verify API server (117 retries left).Result was: { "attempts": 4, "changed": false, "cmd": [ "curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready" ], "delta": "0:00:02.009231", "end": "2019-01-09 16:01:00.307064", "invocation": { "module_args": { "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "msg": "non-zero return code", "rc": 28, "retries": 121, "start": "2019-01-09 16:00:58.297833", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:01.797052", "stdout": "ok", "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready"], "rc": 0, "start": "2019-01-09 16:01:01.633494", "stderr": "", "delta": "0:00:00.163558", "invocation": {"module_args": {"warn": false, "executable": null, "_uses_shell": false, "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 5, "changed": false, "cmd": [ "curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready" ], "delta": "0:00:00.163558", "end": "2019-01-09 16:01:01.797052", "invocation": { "module_args": { "_raw_params": "curl --silent --tlsv1.2 --max-time 2 --cacert /etc/origin/master/ca-bundle.crt https://sp-os-master01.os.ad.scanplus.de:8443/healthz/ready", "_uses_shell": false, "argv": null, "chdir": null, 
"creates": null, "executable": null, "removes": null, "stdin": null, "warn": false } }, "rc": 0, "start": "2019-01-09 16:01:01.633494", "stderr": "", "stderr_lines": [], "stdout": "ok", "stdout_lines": [ "ok" ] } META: ran handlers META: ran handlers PLAY [Gate on master update] ************************************************************************************************************************************************************************************************************************************************************************************************ META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:147 Wednesday 09 January 2019 16:01:01 +0100 (0:00:13.520) 0:21:36.183 ***** ok: [localhost] => { "ansible_facts": { "master_update_completed": [ "sp-os-master01.os.ad.scanplus.de" ] }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:151 Wednesday 09 January 2019 16:01:02 +0100 (0:00:00.261) 0:21:36.444 ***** ok: [localhost] => { "ansible_facts": { "master_update_failed": [] }, "changed": false } TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:153 Wednesday 09 January 2019 16:01:02 +0100 (0:00:00.194) 0:21:36.638 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Reconcile Cluster Roles and Cluster Role Bindings and Security Context Constraints] *********************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [openshift_cli : Install clients] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_cli/tasks/main.yml:2 Wednesday 09 January 2019 16:01:02 +0100 (0:00:00.126) 0:21:36.765 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["atomic-openshift-clients-3.11*"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["atomic-openshift-clients-3.11.51-1.git.0.1560686.el7.x86_64 providing atomic-openshift-clients-3.11* is already installed"], "rc": 0}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "atomic-openshift-clients-3.11*" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "atomic-openshift-clients-3.11.51-1.git.0.1560686.el7.x86_64 providing atomic-openshift-clients-3.11* is already installed" ] }
TASK [openshift_cli : Pull CLI Image (docker)] ******************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_cli/tasks/main.yml:11
Wednesday 09 January 2019 16:01:27 +0100 (0:00:24.967) 0:22:01.732 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [openshift_cli : Pull CLI Image (atomic)] ******************************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_cli/tasks/main.yml:16
Wednesday 09 January 2019 16:01:27 +0100 (0:00:00.109) 0:22:01.841 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [openshift_cli : Copy client binaries/symlinks out of CLI image for use on the host] ***********************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_cli/tasks/main.yml:24
Wednesday 09 January 2019 16:01:27 +0100 (0:00:00.136) 0:22:01.978 *****
skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" }
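Pinning the package spec to atomic-openshift-clients-3.11* keeps oc on the 3.11 stream while letting the z-stream float; 3.11.51 is already installed, so yum reports it as present, and the image-based CLI tasks above are skipped because this is an RPM-based host. A minimal equivalent of the Install clients task shown earlier (the retry wrapper is inferred from the "attempts" field; the counts are illustrative):

    - name: Install clients (sketch)
      yum:
        name: "atomic-openshift-clients-3.11*"
        state: present
      register: result
      until: result is succeeded
      retries: 3
      delay: 5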
TASK [openshift_cli : Install bash completion for oc tools] *****************************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_cli/tasks/main.yml:30
Wednesday 09 January 2019 16:01:27 +0100 (0:00:00.114) 0:22:02.093 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"msg": "", "invocation": {"module_args": {"allow_downgrade": false, "name": ["bash-completion"], "bugfix": false, "list": null, "disable_gpg_check": false, "conf_file": null, "install_repoquery": true, "validate_certs": true, "state": "present", "disablerepo": null, "update_cache": false, "disable_plugin": [], "enablerepo": null, "exclude": null, "security": false, "update_only": false, "enable_plugin": [], "installroot": "/", "skip_broken": false}}, "changed": false, "results": ["1:bash-completion-2.1-6.el7.noarch providing bash-completion is already installed"], "rc": 0}\n', '')
ok: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "bugfix": false, "conf_file": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": null, "enable_plugin": [], "enablerepo": null, "exclude": null, "install_repoquery": true, "installroot": "/", "list": null, "name": [ "bash-completion" ], "security": false, "skip_broken": false, "state": "present", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "", "rc": 0, "results": [ "1:bash-completion-2.1-6.el7.noarch providing bash-completion is already installed" ] }
TASK [openshift_control_plane : Wait for APIs to become available] **********************************************************************************************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:2
Wednesday 09 January 2019 16:01:41 +0100 (0:00:13.466) 0:22:15.560 *****
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"changed": true, "end": "2019-01-09 16:01:41.693899", "stdout":
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"apps.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"deploymentconfigs\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentConfig\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"dc\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"deploymentconfigs/instantiate\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentRequest\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"deploymentconfigs/log\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentLog\\",\\"verbs\\":[\\"get\\"]},{\\"name\\":\\"deploymentconfigs/rollback\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentConfigRollback\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"deploymentconfigs/scale\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"group\\":\\"extensions\\",\\"version\\":\\"v1beta1\\",\\"kind\\":\\"Scale\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"deploymentconfigs/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"DeploymentConfig\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/apps.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:41.515091", "stderr": "", "delta": "0:00:00.178808", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/apps.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=apps.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/apps.openshift.io/v1" ], "delta": "0:00:00.178808", "end": "2019-01-09 16:01:41.693899", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/apps.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "apps.openshift.io", "rc": 0, "start": "2019-01-09 16:01:41.515091", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"apps.openshift.io/v1\",\"resources\":[{\"name\":\"deploymentconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"dc\"],\"categories\":[\"all\"]},{\"name\":\"deploymentconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentRequest\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentLog\",\"verbs\":[\"get\"]},{\"name\":\"deploymentconfigs/rollback\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfigRollback\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/scale\",\"singularName\":\"\",\"namespaced\":true,\"group\":\"extensions\",\"version\":\"v1beta1\",\"kind\":\"Scale\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"deploymentconfigs/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"apps.openshift.io/v1\",\"resources\":[{\"name\":\"deploymentconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"dc\"],\"categories\":[\"all\"]},{\"name\":\"deploymentconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentRequest\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentLog\",\"verbs\":[\"get\"]},{\"name\":\"deploymentconfigs/rollback\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfigRollback\",\"verbs\":[\"create\"]},{\"name\":\"deploymentconfigs/scale\",\"singularName\":\"\",\"namespaced\":true,\"group\":\"extensions\",\"version\":\"v1beta1\",\"kind\":\"Scale\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"deploymentconfigs/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"DeploymentConfig\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:42.068936", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"authorization.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"clusterrolebindings\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterRoleBinding\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"clusterroles\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterRole\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"localresourceaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"LocalResourceAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"localsubjectaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"LocalSubjectAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"resourceaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ResourceAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"rolebindingrestrictions\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"RoleBindingRestriction\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"rolebindings\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"RoleBinding\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"roles\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Role\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"selfsubjectrulesreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"SelfSubjectRulesReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"subjectaccessreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"SubjectAccessReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"subjectrulesreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"SubjectRulesReview\\",\\"verbs\\":[\\"create\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/authorization.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:41.868110", "stderr": "", "delta": "0:00:00.200826", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/authorization.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=authorization.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/authorization.openshift.io/v1" ], "delta": "0:00:00.200826", "end": "2019-01-09 16:01:42.068936", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/authorization.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "authorization.openshift.io", "rc": 0, "start": "2019-01-09 16:01:41.868110", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"authorization.openshift.io/v1\",\"resources\":[{\"name\":\"clusterrolebindings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"clusterroles\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRole\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"localresourceaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"localsubjectaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalSubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"resourceaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"rolebindingrestrictions\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBindingRestriction\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"rolebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"roles\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Role\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"selfsubjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SelfSubjectRulesReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SubjectRulesReview\",\"verbs\":[\"create\"]}]}", "stdout_lines": [ 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"authorization.openshift.io/v1\",\"resources\":[{\"name\":\"clusterrolebindings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"clusterroles\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterRole\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"localresourceaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"localsubjectaccessreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"LocalSubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"resourceaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ResourceAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"rolebindingrestrictions\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBindingRestriction\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"rolebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"RoleBinding\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"roles\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Role\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"]},{\"name\":\"selfsubjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SelfSubjectRulesReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectaccessreviews\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SubjectAccessReview\",\"verbs\":[\"create\"]},{\"name\":\"subjectrulesreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SubjectRulesReview\",\"verbs\":[\"create\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:42.409913", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"build.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"buildconfigs\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildConfig\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"bc\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"buildconfigs/instantiate\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildRequest\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"buildconfigs/instantiatebinary\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BinaryBuildRequestOptions\\",\\"verbs\\":[]},{\\"name\\":\\"buildconfigs/webhooks\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Build\\",\\"verbs\\":[]},{\\"name\\":\\"builds\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Build\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"builds/clone\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildRequest\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"builds/details\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Build\\",\\"verbs\\":[\\"update\\"]},{\\"name\\":\\"builds/log\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"BuildLog\\",\\"verbs\\":[\\"get\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/build.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:42.225120", "stderr": "", "delta": "0:00:00.184793", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/build.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=build.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/build.openshift.io/v1" ], "delta": "0:00:00.184793", "end": "2019-01-09 16:01:42.409913", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/build.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "build.openshift.io", "rc": 0, "start": "2019-01-09 16:01:42.225120", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"build.openshift.io/v1\",\"resources\":[{\"name\":\"buildconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"bc\"],\"categories\":[\"all\"]},{\"name\":\"buildconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"buildconfigs/instantiatebinary\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BinaryBuildRequestOptions\",\"verbs\":[]},{\"name\":\"buildconfigs/webhooks\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[]},{\"name\":\"builds\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"builds/clone\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"builds/details\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"update\"]},{\"name\":\"builds/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildLog\",\"verbs\":[\"get\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"build.openshift.io/v1\",\"resources\":[{\"name\":\"buildconfigs\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildConfig\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"bc\"],\"categories\":[\"all\"]},{\"name\":\"buildconfigs/instantiate\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"buildconfigs/instantiatebinary\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BinaryBuildRequestOptions\",\"verbs\":[]},{\"name\":\"buildconfigs/webhooks\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[]},{\"name\":\"builds\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"builds/clone\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildRequest\",\"verbs\":[\"create\"]},{\"name\":\"builds/details\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Build\",\"verbs\":[\"update\"]},{\"name\":\"builds/log\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"BuildLog\",\"verbs\":[\"get\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:42.756526", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"image.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"images\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Image\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"imagesignatures\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ImageSignature\\",\\"verbs\\":[\\"create\\",\\"delete\\"]},{\\"name\\":\\"imagestreamimages\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamImage\\",\\"verbs\\":[\\"get\\"],\\"shortNames\\":[\\"isimage\\"]},{\\"name\\":\\"imagestreamimports\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamImport\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"imagestreammappings\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamMapping\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"imagestreams\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStream\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"is\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"imagestreams/layers\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamLayers\\",\\"verbs\\":[\\"get\\"]},{\\"name\\":\\"imagestreams/secrets\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"SecretList\\",\\"verbs\\":[\\"get\\"]},{\\"name\\":\\"imagestreams/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStream\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"imagestreamtags\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ImageStreamTag\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\"],\\"shortNames\\":[\\"istag\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/image.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:42.558647", "stderr": "", "delta": "0:00:00.197879", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/image.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=image.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/image.openshift.io/v1" ], "delta": "0:00:00.197879", "end": "2019-01-09 16:01:42.756526", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/image.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "image.openshift.io", "rc": 0, "start": "2019-01-09 16:01:42.558647", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"image.openshift.io/v1\",\"resources\":[{\"name\":\"images\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Image\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"imagesignatures\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ImageSignature\",\"verbs\":[\"create\",\"delete\"]},{\"name\":\"imagestreamimages\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImage\",\"verbs\":[\"get\"],\"shortNames\":[\"isimage\"]},{\"name\":\"imagestreamimports\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImport\",\"verbs\":[\"create\"]},{\"name\":\"imagestreammappings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamMapping\",\"verbs\":[\"create\"]},{\"name\":\"imagestreams\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"is\"],\"categories\":[\"all\"]},{\"name\":\"imagestreams/layers\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamLayers\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/secrets\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SecretList\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"imagestreamtags\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamTag\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"],\"shortNames\":[\"istag\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"image.openshift.io/v1\",\"resources\":[{\"name\":\"images\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Image\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"imagesignatures\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ImageSignature\",\"verbs\":[\"create\",\"delete\"]},{\"name\":\"imagestreamimages\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImage\",\"verbs\":[\"get\"],\"shortNames\":[\"isimage\"]},{\"name\":\"imagestreamimports\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamImport\",\"verbs\":[\"create\"]},{\"name\":\"imagestreammappings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamMapping\",\"verbs\":[\"create\"]},{\"name\":\"imagestreams\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"is\"],\"categories\":[\"all\"]},{\"name\":\"imagestreams/layers\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamLayers\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/secrets\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"SecretList\",\"verbs\":[\"get\"]},{\"name\":\"imagestreams/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStream\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"imagestreamtags\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ImageStreamTag\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\"],\"shortNames\":[\"istag\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o 
ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:43.111609", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"network.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"clusternetworks\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterNetwork\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"egressnetworkpolicies\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"EgressNetworkPolicy\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"hostsubnets\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"HostSubnet\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"netnamespaces\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"NetNamespace\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/network.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:42.908925", "stderr": "", "delta": "0:00:00.202684", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/network.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=network.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/network.openshift.io/v1" ], "delta": "0:00:00.202684", "end": "2019-01-09 16:01:43.111609", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/network.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "network.openshift.io", "rc": 0, "start": "2019-01-09 16:01:42.908925", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"network.openshift.io/v1\",\"resources\":[{\"name\":\"clusternetworks\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterNetwork\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"egressnetworkpolicies\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"EgressNetworkPolicy\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"hostsubnets\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"HostSubnet\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"netnamespaces\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"NetNamespace\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"network.openshift.io/v1\",\"resources\":[{\"name\":\"clusternetworks\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterNetwork\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"egressnetworkpolicies\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"EgressNetworkPolicy\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"hostsubnets\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"HostSubnet\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"netnamespaces\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"NetNamespace\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:43.446780", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"oauth.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"oauthaccesstokens\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthAccessToken\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"oauthauthorizetokens\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthAuthorizeToken\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"oauthclientauthorizations\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthClientAuthorization\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"oauthclients\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"OAuthClient\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/oauth.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:43.257228", "stderr": "", "delta": "0:00:00.189552", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/oauth.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=oauth.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/oauth.openshift.io/v1" ], "delta": "0:00:00.189552", "end": "2019-01-09 16:01:43.446780", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/oauth.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "oauth.openshift.io", "rc": 0, "start": "2019-01-09 16:01:43.257228", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"oauth.openshift.io/v1\",\"resources\":[{\"name\":\"oauthaccesstokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAccessToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthauthorizetokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAuthorizeToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclientauthorizations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClientAuthorization\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclients\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClient\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"oauth.openshift.io/v1\",\"resources\":[{\"name\":\"oauthaccesstokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAccessToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthauthorizetokens\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthAuthorizeToken\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclientauthorizations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClientAuthorization\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"oauthclients\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"OAuthClient\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:43.759425", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"project.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"projectrequests\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ProjectRequest\\",\\"verbs\\":[\\"create\\",\\"list\\"]},{\\"name\\":\\"projects\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Project\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/project.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:43.589575", "stderr": "", "delta": "0:00:00.169850", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/project.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=project.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/project.openshift.io/v1" ], "delta": "0:00:00.169850", "end": "2019-01-09 16:01:43.759425", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/project.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "project.openshift.io", "rc": 0, "start": "2019-01-09 16:01:43.589575", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"project.openshift.io/v1\",\"resources\":[{\"name\":\"projectrequests\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ProjectRequest\",\"verbs\":[\"create\",\"list\"]},{\"name\":\"projects\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Project\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"project.openshift.io/v1\",\"resources\":[{\"name\":\"projectrequests\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ProjectRequest\",\"verbs\":[\"create\",\"list\"]},{\"name\":\"projects\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Project\",\"verbs\":[\"create\",\"delete\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:44.167610", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"quota.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"appliedclusterresourcequotas\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"AppliedClusterResourceQuota\\",\\"verbs\\":[\\"get\\",\\"list\\"]},{\\"name\\":\\"clusterresourcequotas\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterResourceQuota\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"clusterquota\\"]},{\\"name\\":\\"clusterresourcequotas/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterResourceQuota\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/quota.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:43.917804", "stderr": "", "delta": "0:00:00.249806", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/quota.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=quota.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/quota.openshift.io/v1" ], "delta": "0:00:00.249806", "end": "2019-01-09 16:01:44.167610", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/quota.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "quota.openshift.io", "rc": 0, "start": "2019-01-09 16:01:43.917804", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"quota.openshift.io/v1\",\"resources\":[{\"name\":\"appliedclusterresourcequotas\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"AppliedClusterResourceQuota\",\"verbs\":[\"get\",\"list\"]},{\"name\":\"clusterresourcequotas\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"clusterquota\"]},{\"name\":\"clusterresourcequotas/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"quota.openshift.io/v1\",\"resources\":[{\"name\":\"appliedclusterresourcequotas\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"AppliedClusterResourceQuota\",\"verbs\":[\"get\",\"list\"]},{\"name\":\"clusterresourcequotas\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"clusterquota\"]},{\"name\":\"clusterresourcequotas/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterResourceQuota\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:44.516414", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"route.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"routes\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Route\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"categories\\":[\\"all\\"]},{\\"name\\":\\"routes/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Route\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/route.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:44.334995", "stderr": "", "delta": "0:00:00.181419", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/route.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=route.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/route.openshift.io/v1" ], "delta": "0:00:00.181419", "end": "2019-01-09 16:01:44.516414", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/route.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": 
"route.openshift.io", "rc": 0, "start": "2019-01-09 16:01:44.334995", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"route.openshift.io/v1\",\"resources\":[{\"name\":\"routes\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"routes/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"route.openshift.io/v1\",\"resources\":[{\"name\":\"routes\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"categories\":[\"all\"]},{\"name\":\"routes/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Route\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:44.835122", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"security.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"podsecuritypolicyreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"PodSecurityPolicyReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"podsecuritypolicyselfsubjectreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"PodSecurityPolicySelfSubjectReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"podsecuritypolicysubjectreviews\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"PodSecurityPolicySubjectReview\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"rangeallocations\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"RangeAllocation\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"securitycontextconstraints\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"SecurityContextConstraints\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"],\\"shortNames\\":[\\"scc\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/security.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:44.662476", "stderr": "", "delta": "0:00:00.172646", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/security.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=security.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/security.openshift.io/v1" ], "delta": "0:00:00.172646", "end": 
"2019-01-09 16:01:44.835122", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/security.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "security.openshift.io", "rc": 0, "start": "2019-01-09 16:01:44.662476", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"security.openshift.io/v1\",\"resources\":[{\"name\":\"podsecuritypolicyreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicyReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicyselfsubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySelfSubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicysubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"rangeallocations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"RangeAllocation\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"securitycontextconstraints\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SecurityContextConstraints\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"scc\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"security.openshift.io/v1\",\"resources\":[{\"name\":\"podsecuritypolicyreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicyReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicyselfsubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySelfSubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"podsecuritypolicysubjectreviews\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"PodSecurityPolicySubjectReview\",\"verbs\":[\"create\"]},{\"name\":\"rangeallocations\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"RangeAllocation\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"securitycontextconstraints\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"SecurityContextConstraints\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"],\"shortNames\":[\"scc\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:45.156499", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"template.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"brokertemplateinstances\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"BrokerTemplateInstance\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"processedtemplates\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Template\\",\\"verbs\\":[\\"create\\"]},{\\"name\\":\\"templateinstances\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"TemplateInstance\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"templateinstances/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"TemplateInstance\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"templates\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"Template\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/template.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:44.979414", "stderr": "", "delta": "0:00:00.177085", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/template.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=template.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/template.openshift.io/v1" ], "delta": "0:00:00.177085", "end": "2019-01-09 16:01:45.156499", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/template.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "template.openshift.io", "rc": 0, "start": "2019-01-09 16:01:44.979414", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"template.openshift.io/v1\",\"resources\":[{\"name\":\"brokertemplateinstances\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"BrokerTemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"processedtemplates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\"]},{\"name\":\"templateinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"templateinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"templates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"template.openshift.io/v1\",\"resources\":[{\"name\":\"brokertemplateinstances\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"BrokerTemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"processedtemplates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\"]},{\"name\":\"templateinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"templateinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"TemplateInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"templates\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"Template\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:45.513667", "stdout": "{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"user.openshift.io/v1\\",\\"resources\\":[{\\"name\\":\\"groups\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Group\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"identities\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"Identity\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"useridentitymappings\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"UserIdentityMapping\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"users\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"User\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/user.openshift.io/v1"], "rc": 0, "start": "2019-01-09 16:01:45.343372", "stderr": "", "delta": "0:00:00.170295", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/user.openshift.io/v1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=user.openshift.io) => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/user.openshift.io/v1" ], "delta": "0:00:00.170295", "end": "2019-01-09 16:01:45.513667", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/user.openshift.io/v1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, 
"executable": null, "removes": null, "stdin": null, "warn": true } }, "item": "user.openshift.io", "rc": 0, "start": "2019-01-09 16:01:45.343372", "stderr": "", "stderr_lines": [], "stdout": "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"user.openshift.io/v1\",\"resources\":[{\"name\":\"groups\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Group\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"identities\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Identity\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"useridentitymappings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"UserIdentityMapping\",\"verbs\":[\"create\",\"delete\",\"get\",\"patch\",\"update\"]},{\"name\":\"users\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"User\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}", "stdout_lines": [ "{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"user.openshift.io/v1\",\"resources\":[{\"name\":\"groups\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Group\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"identities\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"Identity\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"useridentitymappings\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"UserIdentityMapping\",\"verbs\":[\"create\",\"delete\",\"get\",\"patch\",\"update\"]},{\"name\":\"users\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"User\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]}]}" ] } TASK [openshift_control_plane : Get API logs] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:11 Wednesday 09 January 2019 16:01:45 +0100 (0:00:04.319) 0:22:19.879 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : debug] ************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:18 Wednesday 09 January 2019 16:01:45 +0100 (0:00:00.119) 0:22:19.999 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => {} TASK [openshift_control_plane : fail] *************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:22 Wednesday 09 
January 2019 16:01:45 +0100 (0:00:00.115) 0:22:20.114 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Check for apiservices/v1beta1.metrics.k8s.io registration] ********************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:28 Wednesday 09 January 2019 16:01:45 +0100 (0:00:00.109) 0:22:20.223 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (1, '\n{"changed": true, "end": "2019-01-09 16:01:46.484092", "stdout": "", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.metrics.k8s.io"], "failed": true, "delta": "0:00:00.181354", "stderr": "No resources found.\\nError from server (NotFound): apiservices.apiregistration.k8s.io \\"v1beta1.metrics.k8s.io\\" not found", "rc": 1, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.metrics.k8s.io", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2019-01-09 16:01:46.302738", "msg": "non-zero return code"}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.metrics.k8s.io" ], "delta": "0:00:00.181354", "end": "2019-01-09 16:01:46.484092", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.metrics.k8s.io", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "msg": "non-zero return code", "rc": 1, "start": "2019-01-09 16:01:46.302738", "stderr": "No resources found.\nError from server (NotFound): apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\" not found", "stderr_lines": [ "No resources found.", "Error from server (NotFound): apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\" not found" ], "stdout": "", "stdout_lines": [] } TASK [openshift_control_plane : Wait for /apis/metrics.k8s.io/v1beta1 when registered] ************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:37 Wednesday 09 January 2019 16:01:46 +0100 (0:00:00.596) 0:22:20.820 ***** skipping: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [openshift_control_plane : Check for 
apiservices/v1beta1.servicecatalog.k8s.io registration] *************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:46 Wednesday 09 January 2019 16:01:46 +0100 (0:00:00.110) 0:22:20.930 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:47.226651", "stdout": "NAME CREATED AT\\nv1beta1.servicecatalog.k8s.io 2018-01-31T13:26:12Z", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.servicecatalog.k8s.io"], "rc": 0, "start": "2019-01-09 16:01:47.023576", "stderr": "", "delta": "0:00:00.203075", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.servicecatalog.k8s.io", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "apiservices/v1beta1.servicecatalog.k8s.io" ], "delta": "0:00:00.203075", "end": "2019-01-09 16:01:47.226651", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get apiservices/v1beta1.servicecatalog.k8s.io", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:01:47.023576", "stderr": "", "stderr_lines": [], "stdout": "NAME CREATED AT\nv1beta1.servicecatalog.k8s.io 2018-01-31T13:26:12Z", "stdout_lines": [ "NAME CREATED AT", "v1beta1.servicecatalog.k8s.io 2018-01-31T13:26:12Z" ] } TASK [openshift_control_plane : Wait for /apis/servicecatalog.k8s.io/v1beta1 when registered] ******************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_control_plane/tasks/check_master_api_is_ready.yml:56 Wednesday 09 January 2019 16:01:47 +0100 (0:00:00.635) 0:22:21.566 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:47.720481", "stdout": 
"{\\"kind\\":\\"APIResourceList\\",\\"apiVersion\\":\\"v1\\",\\"groupVersion\\":\\"servicecatalog.k8s.io/v1beta1\\",\\"resources\\":[{\\"name\\":\\"clusterservicebrokers\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceBroker\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"clusterservicebrokers/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceBroker\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"clusterserviceclasses\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceClass\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"clusterserviceclasses/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServiceClass\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"clusterserviceplans\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServicePlan\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"clusterserviceplans/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":false,\\"kind\\":\\"ClusterServicePlan\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"servicebindings\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceBinding\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"servicebindings/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceBinding\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"serviceinstances\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceInstance\\",\\"verbs\\":[\\"create\\",\\"delete\\",\\"deletecollection\\",\\"get\\",\\"list\\",\\"patch\\",\\"update\\",\\"watch\\"]},{\\"name\\":\\"serviceinstances/reference\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceInstance\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]},{\\"name\\":\\"serviceinstances/status\\",\\"singularName\\":\\"\\",\\"namespaced\\":true,\\"kind\\":\\"ServiceInstance\\",\\"verbs\\":[\\"get\\",\\"patch\\",\\"update\\"]}]}", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/servicecatalog.k8s.io/v1beta1"], "rc": 0, "start": "2019-01-09 16:01:47.512142", "stderr": "", "delta": "0:00:00.208339", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/servicecatalog.k8s.io/v1beta1", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "get", "--raw", "/apis/servicecatalog.k8s.io/v1beta1" ], "delta": "0:00:00.208339", "end": "2019-01-09 16:01:47.720481", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig get --raw /apis/servicecatalog.k8s.io/v1beta1", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:01:47.512142", "stderr": "", "stderr_lines": [], "stdout": 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"servicecatalog.k8s.io/v1beta1\",\"resources\":[{\"name\":\"clusterservicebrokers\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterservicebrokers/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceclasses\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceclasses/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceplans\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceplans/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"servicebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"servicebindings/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"serviceinstances/reference\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}", "stdout_lines": [ 
"{\"kind\":\"APIResourceList\",\"apiVersion\":\"v1\",\"groupVersion\":\"servicecatalog.k8s.io/v1beta1\",\"resources\":[{\"name\":\"clusterservicebrokers\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterservicebrokers/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceBroker\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceclasses\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceclasses/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServiceClass\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"clusterserviceplans\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"clusterserviceplans/status\",\"singularName\":\"\",\"namespaced\":false,\"kind\":\"ClusterServicePlan\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"servicebindings\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"servicebindings/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceBinding\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"create\",\"delete\",\"deletecollection\",\"get\",\"list\",\"patch\",\"update\",\"watch\"]},{\"name\":\"serviceinstances/reference\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]},{\"name\":\"serviceinstances/status\",\"singularName\":\"\",\"namespaced\":true,\"kind\":\"ServiceInstance\",\"verbs\":[\"get\",\"patch\",\"update\"]}]}" ] } TASK [Reconcile Security Context Constraints] ******************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:173 Wednesday 09 January 2019 16:01:47 +0100 (0:00:00.500) 0:22:22.066 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:01:48.226914", "stdout": "", "cmd": ["oc", "adm", "policy", "--config=/etc/origin/master/admin.kubeconfig", "reconcile-sccs", "--confirm", "--additive-only=true", "-o", "name"], "rc": 0, "start": "2019-01-09 16:01:48.006835", "stderr": "", "delta": "0:00:00.220079", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc adm policy 
--config=/etc/origin/master/admin.kubeconfig reconcile-sccs --confirm --additive-only=true -o name", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": [ "oc", "adm", "policy", "--config=/etc/origin/master/admin.kubeconfig", "reconcile-sccs", "--confirm", "--additive-only=true", "-o", "name" ], "delta": "0:00:00.220079", "end": "2019-01-09 16:01:48.226914", "invocation": { "module_args": { "_raw_params": "oc adm policy --config=/etc/origin/master/admin.kubeconfig reconcile-sccs --confirm --additive-only=true -o name", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:01:48.006835", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } TASK [Migrate storage post policy reconciliation] *************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:182 Wednesday 09 January 2019 16:01:48 +0100 (0:00:00.501) 0:22:22.568 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:03:05.775371", "stdout": "summary: total=10355 errors=0 ignored=0 unchanged=10341 migrated=14", "cmd": ["oc", "adm", "--config=/etc/origin/master/admin.kubeconfig", "migrate", "storage", "--include=*"], "rc": 0, "start": "2019-01-09 16:01:48.509268", "stderr": "", "delta": "0:01:17.266103", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc adm --config=/etc/origin/master/admin.kubeconfig migrate storage --include=*", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "attempts": 1, "changed": true, "cmd": [ "oc", "adm", "--config=/etc/origin/master/admin.kubeconfig", "migrate", "storage", "--include=*" ], "delta": "0:01:17.266103", "end": "2019-01-09 16:03:05.775371", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc adm --config=/etc/origin/master/admin.kubeconfig migrate storage --include=*", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:01:48.509268", "stderr": "", "stderr_lines": [], "stdout": "summary: total=10355 errors=0 ignored=0 unchanged=10341 migrated=14", "stdout_lines": [ "summary: total=10355 errors=0 ignored=0 unchanged=10341 migrated=14" ] } TASK [set_fact] 
************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:196 Wednesday 09 January 2019 16:03:05 +0100 (0:01:17.565) 0:23:40.133 ***** ok: [sp-os-master01.os.ad.scanplus.de] => { "ansible_facts": { "reconcile_complete": true }, "changed": false } META: ran handlers META: ran handlers PLAY [Gate on reconcile] **************************************************************************************************************************************************************************************************************************************************************************************************** META: ran handlers TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:206 Wednesday 09 January 2019 16:03:06 +0100 (0:00:00.147) 0:23:40.280 ***** ok: [localhost] => { "ansible_facts": { "reconcile_completed": [ "sp-os-master01.os.ad.scanplus.de" ] }, "changed": false } TASK [set_fact] ************************************************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:210 Wednesday 09 January 2019 16:03:06 +0100 (0:00:00.234) 0:23:40.515 ***** ok: [localhost] => { "ansible_facts": { "reconcile_failed": [] }, "changed": false } TASK [fail] ***************************************************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:212 Wednesday 09 January 2019 16:03:06 +0100 (0:00:00.136) 0:23:40.652 ***** skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False" } META: ran handlers META: ran handlers PLAY [Update sync DS] ******************************************************************************************************************************************************************************************************************************************************************************************************* META: ran handlers TASK [openshift_node_group : Ensure project exists] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:2 Wednesday 09 January 2019 16:03:06 
+0100 (0:00:00.139) 0:23:40.791 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_project.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"admin_role": "admin", "display_name": null, "description": null, "admin": null, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "state": "present", "node_selector": [""], "debug": false, "name": "openshift-node"}}, "state": "present", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get namespace openshift-node -o json", "results": {"status": {"phase": "Active"}, "kind": "Namespace", "spec": {"finalizers": ["kubernetes"]}, "apiVersion": "v1", "metadata": {"name": "openshift-node", "resourceVersion": "93752659", "creationTimestamp": "2018-01-31T12:57:29Z", "annotations": {"openshift.io/sa.scc.supplemental-groups": "1000050000/10000", "openshift.io/node-selector": "", "openshift.io/sa.scc.mcs": "s0:c7,c4", "openshift.io/sa.scc.uid-range": "1000050000/10000"}, "selfLink": "/api/v1/namespaces/openshift-node", "uid": "4b608bd6-0686-11e8-b4c9-005056aa3492"}}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "admin": null, "admin_role": "admin", "debug": false, "description": null, "display_name": null, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "openshift-node", "node_selector": [ "" ], "state": "present" } }, "results": { "cmd": "/usr/bin/oc get namespace openshift-node -o json", "results": { "apiVersion": "v1", "kind": "Namespace", "metadata": { "annotations": { "openshift.io/node-selector": "", "openshift.io/sa.scc.mcs": "s0:c7,c4", "openshift.io/sa.scc.supplemental-groups": "1000050000/10000", "openshift.io/sa.scc.uid-range": "1000050000/10000" }, "creationTimestamp": "2018-01-31T12:57:29Z", "name": "openshift-node", "resourceVersion": "93752659", "selfLink": "/api/v1/namespaces/openshift-node", "uid": "4b608bd6-0686-11e8-b4c9-005056aa3492" }, "spec": { "finalizers": [ "kubernetes" ] }, "status": { "phase": "Active" } }, "returncode": 0 }, "state": "present" } TASK [openshift_node_group : Make temp directory for templates] ************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:9 Wednesday 09 January 2019 16:03:07 +0100 (0:00:00.550) 0:23:41.341 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:03:07.309300", "stdout": 
"/tmp/ansible-7L4eZ3", "cmd": ["mktemp", "-d", "/tmp/ansible-XXXXXX"], "rc": 0, "start": "2019-01-09 16:03:07.305885", "stderr": "", "delta": "0:00:00.003415", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "mktemp -d /tmp/ansible-XXXXXX", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/ansible-XXXXXX" ], "delta": "0:00:00.003415", "end": "2019-01-09 16:03:07.309300", "invocation": { "module_args": { "_raw_params": "mktemp -d /tmp/ansible-XXXXXX", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:03:07.305885", "stderr": "", "stderr_lines": [], "stdout": "/tmp/ansible-7L4eZ3", "stdout_lines": [ "/tmp/ansible-7L4eZ3" ] } TASK [openshift_node_group : Copy templates to temp directory] ************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:14 Wednesday 09 January 2019 16:03:07 +0100 (0:00:00.323) 0:23:41.665 ***** ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634 `" && echo ansible-tmp-1547046187.5-260239133023634="` echo /root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046187.5-260239133023634=/root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/ansible-7L4eZ3/sync.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync.yaml TO /root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/source SSH: EXEC 
sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync.yaml /root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/ /root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/ansible-7L4eZ3/sync.yaml", "checksum": "0fde9e41a0a3af6881409430936871f9d357b061", "md5sum": "8cbdf680e0e24d6bb6a996b7719bce56", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0644", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "sync.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/ansible-7L4eZ3/sync.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/source", "checksum": "0fde9e41a0a3af6881409430936871f9d357b061", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}, "size": 7958}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync.yaml) => { "changed": true, "checksum": "0fde9e41a0a3af6881409430936871f9d357b061", "dest": "/tmp/ansible-7L4eZ3/sync.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "sync.yaml", "attributes": null, "backup": false, "checksum": "0fde9e41a0a3af6881409430936871f9d357b061", "content": null, "delimiter": null, 
"dest": "/tmp/ansible-7L4eZ3/sync.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/source", "unsafe_writes": null, "validate": null } }, "item": "/usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync.yaml", "md5sum": "8cbdf680e0e24d6bb6a996b7719bce56", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 7958, "src": "/root/.ansible/tmp/ansible-tmp-1547046187.5-260239133023634/source", "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204 `" && echo ansible-tmp-1547046188.0-181602426402204="` echo /root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046188.0-181602426402204=/root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/ansible-7L4eZ3/sync-policy.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-policy.yaml TO /root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-policy.yaml /root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/ /root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/ansible-7L4eZ3/sync-policy.yaml", "checksum": "ef89af8a663d94066ec0a9dad9c2c607edb4dc28", "md5sum": "69f6b3bf64e2265f02927540cb3321ce", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0644", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "sync-policy.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/ansible-7L4eZ3/sync-policy.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/source", "checksum": "ef89af8a663d94066ec0a9dad9c2c607edb4dc28", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}, "size": 437}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-policy.yaml) => { "changed": true, "checksum": "ef89af8a663d94066ec0a9dad9c2c607edb4dc28", "dest": "/tmp/ansible-7L4eZ3/sync-policy.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "sync-policy.yaml", "attributes": null, "backup": false, "checksum": "ef89af8a663d94066ec0a9dad9c2c607edb4dc28", "content": null, "delimiter": null, "dest": "/tmp/ansible-7L4eZ3/sync-policy.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/source", "unsafe_writes": null, "validate": null } }, "item": "/usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-policy.yaml", "md5sum": "69f6b3bf64e2265f02927540cb3321ce", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 
437, "src": "/root/.ansible/tmp/ansible-tmp-1547046188.0-181602426402204/source", "state": "file", "uid": 0 } ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511 `" && echo ansible-tmp-1547046188.52-37036047182511="` echo /root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1547046188.52-37036047182511=/root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511\n', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "checksum_algo": "sha1", "path": "/tmp/ansible-7L4eZ3/sync-images.yaml", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\n', '') PUT /usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-images.yaml TO /root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/source SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r '[sp-os-master01.os.ad.scanplus.de]' (0, 'sftp> put /usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-images.yaml /root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/source\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/ /root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/source && sleep 0'"'"'' (0, '', '') Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"src": "/root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/source", "changed": true, "group": "root", "uid": 0, "dest": "/tmp/ansible-7L4eZ3/sync-images.yaml", "checksum": "7365511dc5ed9a641e3e8340bc5429990130efca", "md5sum": "4e5f198417deff2bd840f47716fcd8d3", "owner": "root", "state": "file", "gid": 0, "secontext": "unconfined_u:object_r:admin_home_t:s0", "mode": "0644", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "sync-images.yaml", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": null, "setype": null, "content": null, "serole": null, "dest": "/tmp/ansible-7L4eZ3/sync-images.yaml", "selevel": null, "regexp": null, "validate": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/source", "checksum": "7365511dc5ed9a641e3e8340bc5429990130efca", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}, "size": 198}\n', '') ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') changed: [sp-os-master01.os.ad.scanplus.de] => (item=/usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-images.yaml) => { "changed": true, "checksum": "7365511dc5ed9a641e3e8340bc5429990130efca", "dest": "/tmp/ansible-7L4eZ3/sync-images.yaml", "diff": [], "gid": 0, "group": "root", "invocation": { "module_args": { "_original_basename": "sync-images.yaml", "attributes": null, "backup": false, "checksum": "7365511dc5ed9a641e3e8340bc5429990130efca", "content": null, "delimiter": null, "dest": "/tmp/ansible-7L4eZ3/sync-images.yaml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/source", "unsafe_writes": null, "validate": null } }, "item": "/usr/share/ansible/openshift-ansible/roles/openshift_node_group/files/sync-images.yaml", "md5sum": "4e5f198417deff2bd840f47716fcd8d3", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 198, "src": "/root/.ansible/tmp/ansible-tmp-1547046188.52-37036047182511/source", "state": "file", "uid": 0 } TASK [openshift_node_group : Update the image tag] ************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:21 Wednesday 09 January 2019 16:03:09 +0100 (0:00:01.695) 0:23:43.361 ***** Using 
module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/yedit.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"index": null, "key": "tag.from.name", "src": "/tmp/ansible-7L4eZ3/sync-images.yaml", "backup": false, "update": false, "value": "registry.redhat.io/openshift3/ose-node:v3.11", "backup_ext": ".20190109T160309", "curr_value_format": "yaml", "edits": null, "state": "present", "value_type": "", "content_type": "yaml", "debug": false, "separator": ".", "content": null, "curr_value": null, "append": false}}, "state": "present", "changed": true, "result": [{"edit": {"kind": "ImageStreamTag", "tag": {"from": {"kind": "DockerImage", "name": "registry.redhat.io/openshift3/ose-node:v3.11"}, "reference": true}, "apiVersion": "image.openshift.io/v1", "metadata": {"namespace": "openshift-node", "name": "node:v3.11"}}, "key": "tag.from.name"}]}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "invocation": { "module_args": { "append": false, "backup": false, "backup_ext": ".20190109T160309", "content": null, "content_type": "yaml", "curr_value": null, "curr_value_format": "yaml", "debug": false, "edits": null, "index": null, "key": "tag.from.name", "separator": ".", "src": "/tmp/ansible-7L4eZ3/sync-images.yaml", "state": "present", "update": false, "value": "registry.redhat.io/openshift3/ose-node:v3.11", "value_type": "" } }, "result": [ { "edit": { "apiVersion": "image.openshift.io/v1", "kind": "ImageStreamTag", "metadata": { "name": "node:v3.11", "namespace": "openshift-node" }, "tag": { "from": { "kind": "DockerImage", "name": "registry.redhat.io/openshift3/ose-node:v3.11" }, "reference": true } }, "key": "tag.from.name" } ], "state": "present" } TASK [openshift_node_group : Ensure the service account can run privileged] ************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:27 Wednesday 09 January 2019 16:03:09 +0100 (0:00:00.381) 0:23:43.742 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_adm_policy_user.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"resource_name": "privileged", "rolebinding_name": null, "namespace": "openshift-node", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "resource_kind": "scc", "state": "present", "user": "system:serviceaccount:openshift-node:sync", "role_namespace": null, "debug": false}}, "changed": false, "present": "present"}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => 
{ "changed": false, "invocation": { "module_args": { "debug": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "namespace": "openshift-node", "resource_kind": "scc", "resource_name": "privileged", "role_namespace": null, "rolebinding_name": null, "state": "present", "user": "system:serviceaccount:openshift-node:sync" } }, "present": "present" } TASK [openshift_node_group : Remove the image stream tag] ******************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:36 Wednesday 09 January 2019 16:03:10 +0100 (0:00:00.934) 0:23:44.677 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:03:10.894929", "stdout": "imagestreamtag.image.openshift.io \\"node:v3.11\\" deleted", "cmd": ["oc", "--config=/etc/origin/master/admin.kubeconfig", "delete", "-n", "openshift-node", "istag", "node:v3.11", "--ignore-not-found"], "rc": 0, "start": "2019-01-09 16:03:10.639038", "stderr": "", "delta": "0:00:00.255891", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig delete -n openshift-node istag node:v3.11 --ignore-not-found", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": [ "oc", "--config=/etc/origin/master/admin.kubeconfig", "delete", "-n", "openshift-node", "istag", "node:v3.11", "--ignore-not-found" ], "delta": "0:00:00.255891", "end": "2019-01-09 16:03:10.894929", "failed_when_result": false, "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig delete -n openshift-node istag node:v3.11 --ignore-not-found", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:03:10.639038", "stderr": "", "stderr_lines": [], "stdout": "imagestreamtag.image.openshift.io \"node:v3.11\" deleted", "stdout_lines": [ "imagestreamtag.image.openshift.io \"node:v3.11\" deleted" ] } TASK [openshift_node_group : Remove existing pods if present] *************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:47 Wednesday 09 January 2019 16:03:11 +0100 (0:00:00.571) 0:23:45.248 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o 
StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "pods", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "absent", "debug": false, "selector": null, "name": "sync"}}, "state": "absent", "changed": false}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "pods", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "absent" } }, "state": "absent" } TASK [openshift_node_group : Apply the config] ****************************************************************************************************************************************************************************************************************************************************************************** task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:55 Wednesday 09 January 2019 16:03:11 +0100 (0:00:00.571) 0:23:45.819 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"changed": true, "end": "2019-01-09 16:03:12.257657", "stdout": "imagestreamtag.image.openshift.io/node:v3.11 created\\nserviceaccount/sync unchanged\\nrolebinding.authorization.openshift.io/sync-node-config-reader-binding unchanged\\ndaemonset.apps/sync configured", "cmd": "oc --config=/etc/origin/master/admin.kubeconfig apply -f /tmp/ansible-7L4eZ3", "rc": 0, "start": "2019-01-09 16:03:11.789321", "stderr": "", "delta": "0:00:00.468336", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig apply -f /tmp/ansible-7L4eZ3", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\n', '') changed: [sp-os-master01.os.ad.scanplus.de] => { "changed": true, "cmd": "oc --config=/etc/origin/master/admin.kubeconfig apply -f /tmp/ansible-7L4eZ3", "delta": "0:00:00.468336", "end": "2019-01-09 16:03:12.257657", "invocation": { "module_args": { "_raw_params": "oc --config=/etc/origin/master/admin.kubeconfig apply -f /tmp/ansible-7L4eZ3", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true } }, "rc": 0, "start": "2019-01-09 16:03:11.789321", "stderr": "", "stderr_lines": [], "stdout": "imagestreamtag.image.openshift.io/node:v3.11 created\nserviceaccount/sync unchanged\nrolebinding.authorization.openshift.io/sync-node-config-reader-binding 
unchanged\ndaemonset.apps/sync configured", "stdout_lines": [ "imagestreamtag.image.openshift.io/node:v3.11 created", "serviceaccount/sync unchanged", "rolebinding.authorization.openshift.io/sync-node-config-reader-binding unchanged", "daemonset.apps/sync configured" ] } TASK [openshift_node_group : Remove temp directory] ************************************************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:59 Wednesday 09 January 2019 16:03:12 +0100 (0:00:00.839) 0:23:46.658 ***** Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"directory_mode": null, "force": false, "remote_src": null, "_original_basename": null, "path": "/tmp/ansible-7L4eZ3", "owner": null, "follow": true, "group": null, "unsafe_writes": null, "state": "absent", "content": null, "serole": null, "setype": null, "selevel": null, "regexp": null, "src": null, "name": "/tmp/ansible-7L4eZ3", "seuser": null, "recurse": false, "_diff_peek": null, "delimiter": null, "mode": null, "attributes": null, "backup": null}}, "path": "/tmp/ansible-7L4eZ3", "state": "absent", "changed": true, "diff": {"after": {"path": "/tmp/ansible-7L4eZ3", "state": "absent"}, "before": {"path": "/tmp/ansible-7L4eZ3", "state": "directory"}}}\n', '') ok: [sp-os-master01.os.ad.scanplus.de] => { "changed": false, "diff": { "after": { "path": "/tmp/ansible-7L4eZ3", "state": "absent" }, "before": { "path": "/tmp/ansible-7L4eZ3", "state": "directory" } }, "invocation": { "module_args": { "_diff_peek": null, "_original_basename": null, "attributes": null, "backup": null, "content": null, "delimiter": null, "directory_mode": null, "follow": true, "force": false, "group": null, "mode": null, "name": "/tmp/ansible-7L4eZ3", "owner": null, "path": "/tmp/ansible-7L4eZ3", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "absent", "unsafe_writes": null } }, "path": "/tmp/ansible-7L4eZ3", "state": "absent" } TASK [openshift_node_group : Wait for the sync daemonset to become ready and available] ************************************************************************************************************************************************************************************************************************************* task path: /usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:65 Wednesday 09 January 2019 16:03:12 +0100 (0:00:00.317) 0:23:46.976 ***** Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [{"status": {"numberReady": 10, "observedGeneration": 16, "numberAvailable": 10, "desiredNumberScheduled": 15, "numberUnavailable": 5, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! 
updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874609", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (60 retries left).Result was: { "attempts": 1, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874609", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 10, "numberMisscheduled": 0, "numberReady": 10, "numberUnavailable": 5, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (59 retries left).Result was: { "attempts": 2, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (58 retries left).Result was: { "attempts": 3, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (57 retries left).Result was: { "attempts": 4, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (56 retries left).Result was: { "attempts": 5, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
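The task is polling the DaemonSet until the rollout completes, and it keeps retrying because the status still reports only 7 of 15 pods ready and available. As a minimal sketch (not part of this playbook's output, and assuming the same admin kubeconfig the module invocation above uses), the condition being waited on can be checked by hand:

# Sketch only: compare the status fields the oc_obj wait task is polling.
# Assumes /etc/origin/master/admin.kubeconfig, as in the invocation above.
oc get daemonset sync -n openshift-node \
  --config=/etc/origin/master/admin.kubeconfig \
  -o jsonpath='{.status.desiredNumberScheduled} {.status.updatedNumberScheduled} {.status.numberAvailable}{"\n"}'
# The dumps above show 15 desired but only 8 updated and 7 available,
# so the wait task keeps retrying (this run allows up to 61 retries).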
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '[raw module output omitted; identical to the previous list result]', '')
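For orientation while the retries scroll past: the sync pod's command (embedded repeatedly in the object dumps above) first discovers which configmap to pull. Restated from that script for readability; this is illustrative, not new playbook code:

# Restated from the sync DaemonSet script above (illustrative only).
file=/etc/sysconfig/origin-node
[[ -f /etc/sysconfig/atomic-openshift-node ]] && file=/etc/sysconfig/atomic-openshift-node
# Take the first uncommented BOOTSTRAP_CONFIG_NAME=<value> entry.
name="$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\1|p' "${file}" | head -1)"
# Fetch the rendered node config into a staging directory.
oc extract "configmaps/${name}" -n openshift-node --to=/etc/origin/node/tmp --confirm \
  --config /etc/origin/node/node.kubeconfig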
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (55 retries left).Result was: { "attempts": 6, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
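The other half of that script explains the node.openshift.io/md5sum annotation seen on the nodes: when the extracted config differs from the running one, it is swapped in, the kubelet is killed so it restarts with the new config (per the script's "restarting kubelet" message), and the new hash is recorded. Again restated from the dump above, not code added by the playbook:

# Restated from the sync DaemonSet script above (illustrative only).
md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new
if [[ "$(cat /tmp/.old)" != "$(cat /tmp/.new)" ]]; then
  mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml
  # Killing the kubelet triggers its restart with the new config.
  pkill -U 0 -f '(^|/)hyperkube kubelet '
fi
# NODE_NAME comes from the pod env (fieldRef spec.nodeName) in the spec above.
# Record the active config hash on the node object so the rollout can be tracked.
oc annotate --config=/etc/origin/node/node.kubeconfig "node/${NODE_NAME}" \
  node.openshift.io/md5sum="$(cut -d' ' -f1 /tmp/.new)" --overwrite
cp -f /tmp/.new /tmp/.old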
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (54 retries left).Result was: { "attempts": 7, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (53 retries left).Result was: { "attempts": 8, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (52 retries left).Result was: { "attempts": 9, "changed": false, "invocation": { "module_args": { ... } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { ... "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
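The task being retried here does nothing but run /usr/bin/oc get daemonset sync -o json -n openshift-node and wait for the status block to catch up; in the result above that means waiting for numberAvailable (7) to reach desiredNumberScheduled (15). A minimal manual spot-check of the same condition, assuming the admin kubeconfig path already shown in the module args, might look like:

    # Print desired vs. available pod counts for the sync daemonset
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node \
      get daemonset sync -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}{"\n"}'

    # Or let oc block until the RollingUpdate finishes
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node rollout status ds/sync

Both forms read the same status fields the playbook is polling, just without the sixty-one-attempt retry wrapper.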
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
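While the playbook issues the next poll, the stall itself can be localized: with desiredNumberScheduled 15 but only 8 pods updated and 8 unavailable, specific nodes are not bringing up the new sync pod. A hedged way to find them with standard oc commands (the app=sync label comes from the DaemonSet selector shown above; <pod-name> is a placeholder for a name from the first command's output):

    # Show each sync pod, its readiness, and the node it runs on
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node \
      get pods -l app=sync -o wide

    # Inspect one non-ready pod for image-pull or crash-loop events
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node \
      describe pod <pod-name>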
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (51 retries left).Result was: { "attempts": 10, "changed": false, "invocation": { "module_args": { ... } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { ... "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n      echo \\"error: Unable to restart Kubelet\\" 2>&1\\n      sleep 10 &\\n      wait $!\\n      continue\\n    fi\\n  fi\\n  # annotate node with md5sum of the config\\n  oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n    node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n  cp -f /tmp/.new /tmp/.old\\n  sleep 180 &\\n  wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n  {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "[elided: apply-time copy of this same DaemonSet spec, embedding the identical bash sync script shown above]", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '')
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (50 retries left).Result was: { "attempts": 11, "changed": false, "invocation": { "module_args": { ...same module_args as above... } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { ...identical DaemonSet object elided; see the full dump above... "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '[elided: identical oc_obj list result, returncode 0, same DaemonSet object and status as the full dump above]', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (49 retries left).Result was: { "attempts": 12, "changed": false, "invocation": { "module_args": { ...same module_args as above... } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { ...identical DaemonSet object elided... "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
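Attempts 11 and 12 return a byte-identical object, so the bottleneck is not this query but the eight sync pods that never become ready. The log does not show the pod side; typical next diagnostic steps would look like this (a sketch only, the pod name is hypothetical):

    # which sync pods are pending or crashlooping, and on which nodes
    oc get pods -n openshift-node -l app=sync -o wide

    # scheduling and image-pull events for the daemonset
    oc describe daemonset sync -n openshift-node

    # logs of one stuck pod, e.g. a hypothetical sync-x7k2p
    oc logs sync-x7k2p -n openshift-node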
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (48 retries left).Result was: { "attempts": 13, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (47 retries left).Result was: { "attempts": 14, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' FAILED - RETRYING: Wait for the sync daemonset to become ready and available (46 retries left).Result was: { "attempts": 15, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' FAILED - RETRYING: Wait for the sync daemonset to become ready and available (45 retries left).Result was: { "attempts": 16, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (44 retries left).Result was: { "attempts": 17, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{ ... }}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { ... }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
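The task above keeps polling /usr/bin/oc get daemonset sync -o json -n openshift-node because the status block still reports only 8 of 15 desired pods updated and 7 available; with updateStrategy.rollingUpdate.maxUnavailable: 50%, up to 8 of the 15 pods (50%, rounded up) may be down at once, which matches "numberUnavailable": 8 above. A minimal manual check of the same convergence condition could look like the following sketch — a hypothetical helper, not part of openshift-ansible, assuming the admin kubeconfig path shown in the module args:

    #!/bin/bash
    # Sketch: wait until the sync daemonset has every desired pod updated and available.
    # Assumes the same admin kubeconfig that oc_obj uses above; not part of the playbook.
    export KUBECONFIG=/etc/origin/master/admin.kubeconfig
    while true; do
      read -r desired updated available < <(oc get daemonset sync -n openshift-node \
        -o jsonpath='{.status.desiredNumberScheduled} {.status.updatedNumberScheduled} {.status.numberAvailable}')
      echo "desired=${desired} updated=${updated} available=${available}"
      [[ "${updated}" == "${desired}" && "${available}" == "${desired}" ]] && break
      sleep 15
    done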
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
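The sync container script embedded (repeatedly) in the dumps above is what does the actual work on each node: it extracts the bootstrap configmap, compares md5sums, and only when the rendered node-config.yaml changes does it swap the file in, relabel the node, and kill the kubelet so systemd restarts it with the new config. Stripped of the retry and labeling branches, the loop reduces to roughly this condensed restatement (${name} is resolved earlier from BOOTSTRAP_CONFIG_NAME in the sysconfig file):

    #!/bin/bash
    # Condensed restatement of the sync loop from the DaemonSet above (error paths omitted).
    while true; do
      # Render the candidate config from the bootstrap configmap.
      oc extract "configmaps/${name}" -n openshift-node --to=/etc/origin/node/tmp --confirm \
        --config /etc/origin/node/node.kubeconfig > /dev/null
      md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new
      if [[ "$(cat /tmp/.old)" != "$(cat /tmp/.new)" ]]; then
        mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml
        # There is no reload signal here: killing the kubelet lets systemd
        # restart it with the new node-config.yaml.
        pkill -U 0 -f '(^|/)hyperkube kubelet '
      fi
      cp -f /tmp/.new /tmp/.old
      sleep 180
    done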
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (42 retries left).Result was: { "attempts": 19, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
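The bash payload quoted in full in the DaemonSet above boils down to a checksum-based reload loop: extract the configmap to a staging path, compare checksums, and promote plus restart on change. A minimal sketch of that pattern, assuming the same paths as the script; reload_kubelet here is a hypothetical stand-in for the script's pkill-based kubelet restart:

#!/bin/bash
# Sketch of the sync pod's change detection: compare the md5 hash of the
# freshly extracted config against the hash of the last config acted on.
CONFIG=/etc/origin/node/node-config.yaml
CANDIDATE=/etc/origin/node/tmp/node-config.yaml
reload_kubelet() { pkill -U 0 -f '(^|/)hyperkube kubelet ' || true; }  # stand-in

old=$(md5sum "$CONFIG" 2>/dev/null | cut -d' ' -f1)
while true; do
  new=$(md5sum "$CANDIDATE" | cut -d' ' -f1)
  if [[ "$new" != "$old" ]]; then
    mv "$CANDIDATE" "$CONFIG"   # promote the new config
    reload_kubelet              # force the kubelet to pick it up
  fi
  old=$new
  sleep 180 & wait $!           # background sleep so the TERM trap stays responsive
done

Unlike this sketch, the actual script compares whole md5sum output lines (hash plus file path, /tmp/.old vs /tmp/.new), and the two paths differ on the first pass, so the first iteration after startup always registers as a change.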
The daemonset status was unchanged on this attempt: desiredNumberScheduled 15, updatedNumberScheduled 8, currentNumberScheduled 9, numberReady 7, numberUnavailable 8.
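The task driving these retries simply lists the daemonset and re-checks its status counters. The same wait can be reproduced by hand; a sketch under the assumption that the loop is satisfied once every scheduled pod is both updated and available (field names taken from the status block above; this is illustrative, not the playbook's own condition):

ns=openshift-node
while true; do
  # Pull the three relevant counters in one call; empty values mean zero.
  read -r desired updated available < <(oc get ds sync -n "$ns" \
    -o jsonpath='{.status.desiredNumberScheduled} {.status.updatedNumberScheduled} {.status.numberAvailable}')
  echo "updated ${updated:-0}/${desired}, available ${available:-0}/${desired}"
  [[ "$desired" == "${updated:-0}" && "$desired" == "${available:-0}" ]] && break
  sleep 15
done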
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (41 retries left).Result was: { "attempts": 20, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
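With 8 of 15 pods unavailable across twenty straight attempts, the practical next step is to find which nodes the rollout is stuck on. The commands below are standard oc usage, not part of the playbook; the app=sync label and namespace come from the DaemonSet above, and the field-selector filter assumes a client recent enough to support it:

# Where is each sync pod scheduled, and which ones are not ready?
oc get pods -n openshift-node -l app=sync -o wide

# Narrow to pods that never reached Running, then check recent events for clues.
oc get pods -n openshift-node -l app=sync --field-selector=status.phase!=Running -o wide
oc get events -n openshift-node --sort-by=.lastTimestamp | tail -20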
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (40 retries left).Result was: { "attempts": 21, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (39 retries left).Result was: { "attempts": 22, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '[... raw oc_obj output elided: same "sync" DaemonSet JSON as above ...]', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (38 retries left). Result was: { "attempts": 23, [... DaemonSet JSON identical to the previous attempt; status unchanged: 7/15 ready, 8 updated, 8 unavailable, generation 16 ...], "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '[... raw oc_obj output elided: identical "sync" DaemonSet JSON ...]', '')
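Every retry returns the same status block: 7 of 15 ready, with updatedNumberScheduled 8 against templateGeneration 16, so roughly half the nodes have not yet converged on the new template. One way to find the stragglers, assuming the pod-template-generation label that DaemonSet controllers of this vintage stamp on their pods (a hypothetical diagnostic, not something the playbook runs):

# Print each sync pod's node and the template generation it was created from;
# nodes reporting a generation below 16 still run the old sync pod.
oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node get pods -l app=sync \
  -o jsonpath='{range .items[*]}{.spec.nodeName}{"\t"}{.metadata.labels.pod-template-generation}{"\n"}{end}'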
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (37 retries left).Result was: { "attempts": 24, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (36 retries left).Result was: { "attempts": 25, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (35 retries left).Result was: { "attempts": 26, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so ... (rest of the kubectl.kubernetes.io/last-applied-configuration annotation elided; it repeats, in escaped form, the same script and pod template already shown above) ... "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '')
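Every poll in this stretch returns the same status block: 7 ready, 8 unavailable, 8 updated, 15 desired, so the sync rollout is stalled roughly halfway. A quick manual triage for that situation, run from a master with the same admin kubeconfig the module uses; these commands are illustrative and are not part of the playbook:

    # hypothetical manual triage (not executed by openshift-ansible)
    # list the sync pods with node placement and readiness
    oc --config=/etc/origin/master/admin.kubeconfig get pods -n openshift-node -l app=sync -o wide
    # show per-pod events and rollout detail for the daemonset
    oc --config=/etc/origin/master/admin.kubeconfig describe daemonset sync -n openshift-node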
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (34 retries left).Result was: { "attempts": 27, ... (body identical to the previous result: same DaemonSet manifest, same status of "numberReady": 7, "numberUnavailable": 8, "updatedNumberScheduled": 8, "desiredNumberScheduled": 15) ... "retries": 61, "state": "list" }
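The condition the retry loop is waiting on can be reproduced by hand against the same status fields the oc_obj module reads. A minimal sketch, assuming the task's until-condition compares numberReady and updatedNumberScheduled to desiredNumberScheduled (the exact expression lives in the openshift-ansible role, not in this log):

    # hypothetical equivalent of the "wait for sync daemonset" check
    cfg=/etc/origin/master/admin.kubeconfig
    desired=$(oc --config=$cfg get daemonset sync -n openshift-node -o jsonpath='{.status.desiredNumberScheduled}')
    ready=$(oc --config=$cfg get daemonset sync -n openshift-node -o jsonpath='{.status.numberReady}')
    updated=$(oc --config=$cfg get daemonset sync -n openshift-node -o jsonpath='{.status.updatedNumberScheduled}')
    echo "desired=$desired ready=$ready updated=$updated"
    if [[ "$ready" == "$desired" && "$updated" == "$desired" ]]; then
      echo "sync daemonset ready"
    else
      echo "still rolling"   # matches the 7/15 state seen in the dumps above
    fi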
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (33 retries left).Result was: { "attempts": 28, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
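For readability, the core of the sync container's /bin/bash -c payload (dumped above in plain and escaped form) reduces to the loop below. This is a condensed paraphrase of what the dumps already contain, not new behavior; the setup of ${name} (from BOOTSTRAP_CONFIG_NAME), ${labels} (parsed from openshift-node-config output), the service-account token flag, and the background config-name watcher are omitted:

    # condensed from the sync DaemonSet's script; variable setup omitted
    while true; do
      # pull the current bootstrap configmap for this node's profile
      if ! oc extract "configmaps/${name}" -n openshift-node --to=/etc/origin/node/tmp \
           --confirm --config /etc/origin/node/node.kubeconfig > /dev/null; then
        sleep 15; continue
      fi
      md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new
      if [[ "$(cat /tmp/.old)" != "$(cat /tmp/.new)" ]]; then
        # config changed: install it, reapply node labels, bounce the kubelet
        mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml
        oc label --config=/etc/origin/node/node.kubeconfig "node/${NODE_NAME}" ${labels} --overwrite
        pkill -U 0 -f '(^|/)hyperkube kubelet ' || { sleep 10; continue; }
      fi
      # record the applied config hash on the node object, then sleep
      oc annotate --config=/etc/origin/node/node.kubeconfig "node/${NODE_NAME}" \
         node.openshift.io/md5sum="$(cut -d' ' -f1 /tmp/.new)" --overwrite
      cp -f /tmp/.new /tmp/.old
      sleep 180
    done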
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (33 retries left).Result was: { "attempts": 28, ... (identical to attempt 27, including the unchanged status block) ... "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (32 retries left).Result was: { "attempts": 29, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"}, [... last-applied-configuration annotation and embedded sync script identical to the previous attempt ...]\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, [... metadata, spec.template, and the sync container command identical to the previous attempt ...] "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
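The wait task polls /usr/bin/oc get daemonset sync -o json -n openshift-node once per retry and keeps retrying while the rollout is incomplete; at this point only 7 of the 15 desired pods are available. A minimal sketch of the equivalent readiness check done by hand, assuming the admin kubeconfig path shown above and treating numberAvailable == desiredNumberScheduled as "ready" (the exact condition the playbook evaluates is not visible in this excerpt):

    #!/bin/bash
    # Sketch only: reproduce the poll this task keeps retrying. Assumes the
    # kubeconfig path from the log; numberAvailable may be empty while it is 0.
    set -euo pipefail
    cfg=/etc/origin/master/admin.kubeconfig
    read -r desired available <<<"$(oc --config "$cfg" get daemonset sync -n openshift-node \
      -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}')"
    if [[ "${desired}" == "${available:-0}" ]]; then
      echo "info: sync daemonset ready (${available}/${desired})"
    else
      echo "info: sync daemonset not ready (${available:-0}/${desired})"
      exit 1
    fi

Run against the cluster in this state it would print "info: sync daemonset not ready (7/15)" and exit 1, which is why the task retries.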
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (31 retries left).Result was: { "attempts": 30, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
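The unchanged status is internally consistent with the update strategy: with updateStrategy.rollingUpdate.maxUnavailable at 50% and desiredNumberScheduled at 15, the controller may take up to 8 pods down at once, matching the reported "numberUnavailable": 8 while template generation 16 rolls out. The ceiling rounding below is an assumption inferred from the observed numbers, not something stated in this log:

    #!/bin/bash
    # Sketch: 50% of 15 desired pods, rounded up, gives the 8 allowed-unavailable
    # pods seen in the status (assumed round-up; integer ceiling via arithmetic).
    desired=15; percent=50
    echo $(( (desired * percent + 99) / 100 ))   # prints 8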
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (30 retries left).Result was: { "attempts": 31, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, [... DaemonSet spec and metadata identical to the copy above ...]}]}}\n', '')
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (29 retries left).Result was: { "attempts": 32, "changed": false, [... invocation and DaemonSet output identical to the previous attempt; status still 7 of 15 ready ...], "retries": 61, "state": "list" }
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (28 retries left).Result was: { "attempts": 33, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (27 retries left).
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (26 retries left).Result was: { "attempts": 35, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
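The task emitting these retries is an Ansible until-loop around the lib_openshift oc_obj module: it re-runs /usr/bin/oc get daemonset sync -o json -n openshift-node (61 retries budgeted, per "retries": 61) until the reported status shows every desired pod ready, which here it never does (numberReady stays at 7 of 15). The same condition can be checked by hand while the playbook spins; a minimal sketch, where the command, namespace, label selector, and kubeconfig path are taken from the log but the jsonpath expressions are illustrative:

    # Poll the same status fields the playbook is waiting on
    oc get daemonset sync -n openshift-node --config=/etc/origin/master/admin.kubeconfig \
      -o jsonpath='desired={.status.desiredNumberScheduled} ready={.status.numberReady} updated={.status.updatedNumberScheduled}{"\n"}'

    # The rollout is stuck while ready < desired; list the sync pods to find the lagging nodes
    oc get pods -n openshift-node -l app=sync -o wide --config=/etc/origin/master/admin.kubeconfig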
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (25 retries left).Result was: { "attempts": 36, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (24 retries left).Result was: { "attempts": 37, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
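Each poll in this retry loop is the lib_openshift oc_obj module running /usr/bin/oc get daemonset sync -o json -n openshift-node on the master over SSH and re-reading the status block; the task keeps failing because only 7 of the 15 desired sync pods are available and only 8 run the updated template. A minimal sketch of that readiness test in Python, assuming (this is an illustration, not the openshift-ansible role's actual until-condition) that it compares the same status fields shown in the dumps above:

    #!/usr/bin/env python
    # Hypothetical re-implementation of the "Wait for the sync daemonset to
    # become ready and available" check; the real condition lives in the
    # openshift-ansible playbook, not in this log.
    import json
    import subprocess

    def sync_daemonset_ready():
        # Same command the oc_obj module reports in "cmd" above.
        out = subprocess.check_output(
            ["/usr/bin/oc", "get", "daemonset", "sync",
             "-o", "json", "-n", "openshift-node"])
        status = json.loads(out)["status"]
        desired = status.get("desiredNumberScheduled", 0)
        # The dumps above report desiredNumberScheduled: 15 but
        # numberAvailable: 7 and updatedNumberScheduled: 8, so a check like
        # this stays false and the task sleeps and retries.
        return (desired > 0
                and status.get("numberAvailable", 0) == desired
                and status.get("updatedNumberScheduled", 0) == desired)

    if __name__ == "__main__":
        print("ready" if sync_daemonset_ready() else "not ready")

With the status captured above, such a check cannot succeed until the remaining nodes roll out the updated pod template and report ready, which is why the counter keeps dropping from the task's 61-attempt budget.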
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (23 retries left).Result was: { "attempts": 38, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (22 retries left).Result was: { "attempts": 39, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (21 retries left).Result was: { "attempts": 40, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (20 retries left).Result was: { "attempts": 41, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
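The task keeps retrying because the status block never converges: the daemonset wants 15 pods, but only 7 are ready and available, and only 8 run the updated template. A rough manual equivalent of the condition the playbook is polling for (a sketch only; the actual comparison happens inside the oc_obj retry loop, and the kubeconfig path and field names are taken from the dumps above):

    #!/bin/bash
    # Sketch: check whether the sync daemonset has converged.
    KC=/etc/origin/master/admin.kubeconfig
    desired=$(oc get daemonset sync -n openshift-node --config="$KC" \
        -o jsonpath='{.status.desiredNumberScheduled}')
    updated=$(oc get daemonset sync -n openshift-node --config="$KC" \
        -o jsonpath='{.status.updatedNumberScheduled}')
    available=$(oc get daemonset sync -n openshift-node --config="$KC" \
        -o jsonpath='{.status.numberAvailable}')
    if [[ "$desired" == "$updated" && "$desired" == "$available" ]]; then
        echo "sync daemonset ready: ${available}/${desired}"
    else
        echo "sync daemonset NOT ready: desired=${desired} updated=${updated} available=${available}"
    fi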
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": { …same module_args… }, "state": "list", "changed": false, "results": { …same DaemonSet JSON as above… }}\n', '')
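For reference, everything the sync pods do is driven by the bash script embedded in the daemonset's command above: wait for BOOTSTRAP_CONFIG_NAME, pull the matching configmap, compare checksums, and restart the kubelet only when node-config.yaml actually changed. Stripped of its retry, relabeling, and hostname-override branches, the core loop amounts to the following simplified sketch (not a drop-in replacement for the script above):

    #!/bin/bash
    # Simplified core of the sync loop from the daemonset spec above.
    name=$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\1|p' /etc/sysconfig/origin-node | head -1)
    mkdir -p /etc/origin/node/tmp
    while true; do
        # fetch the rendered node config for this node's bootstrap profile
        if oc extract "configmaps/${name}" -n openshift-node \
            --to=/etc/origin/node/tmp --confirm --request-timeout=10s \
            --config /etc/origin/node/node.kubeconfig > /dev/null; then
            md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new
            # swap the file and bounce the kubelet only when the checksum moved
            if ! cmp -s /tmp/.new /tmp/.old; then
                mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml
                pkill -U 0 -f '(^|/)hyperkube kubelet ' || :   # systemd restarts it
            fi
            cp -f /tmp/.new /tmp/.old
        fi
        sleep 180
    done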
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (19 retries left).Result was: { "attempts": 42, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
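The counters also show why the rollout is wedged rather than merely slow. With "maxUnavailable": "50%" of 15 desired pods, the controller was allowed to take roughly half the fleet down at once, and the status reports exactly 8 pods unavailable and 8 scheduled with the updated template; until those updated pods report Ready, no further old pods can be replaced, so numberReady stays pinned at 7. A back-of-the-envelope check (rounding the percentage up is an assumption inferred from the observed counters, not taken from the controller source):

    #!/bin/bash
    # Sketch: relate the RollingUpdate parameters to the status counters above.
    desired=15
    pct=50                                              # rollingUpdate.maxUnavailable: "50%"
    max_unavailable=$(( (desired * pct + 99) / 100 ))   # ceil(15 * 0.50) = 8
    echo "maxUnavailable=${max_unavailable} pods"       # matches "numberUnavailable": 8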
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (18 retries left).Result was: attempt 43 of the 61-retry budget; the pretty-printed result repeats the identical DaemonSet object, status unchanged (desiredNumberScheduled 15, currentNumberScheduled 9, numberReady 7, numberAvailable 7, numberUnavailable 8, updatedNumberScheduled 8, observedGeneration 16). Duplicate JSON omitted.
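The container command quoted (with JSON escaping) throughout these dumps is the heart of the sync DaemonSet: it polls the node configmap, detects changes by checksum, and bounces the kubelet. Stripped of the escaping and the label-reapply branch, the loop reduces to roughly the following sketch; this is a distillation for readability, not the verbatim container script, and the ${name} variable is the BOOTSTRAP_CONFIG_NAME value the script reads from /etc/sysconfig/origin-node:

#!/bin/bash
# Distilled sketch of the sync pod's refresh loop (paths as seen in the log).
set -euo pipefail
while true; do
  # fetch the rendered node-config.yaml for this node's bootstrap profile
  if ! oc extract "configmaps/${name}" -n openshift-node \
       --to=/etc/origin/node/tmp --confirm --request-timeout=10s \
       --config /etc/origin/node/node.kubeconfig > /dev/null; then
    sleep 15; continue          # API unreachable: retry later
  fi
  md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new
  if [[ "$(cat /tmp/.old)" != "$(cat /tmp/.new)" ]]; then
    # config changed: install it and force the kubelet to restart with it
    mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml
    pkill -U 0 -f '(^|/)hyperkube kubelet ' || true
  fi
  # record the checksum on the node object so the rollout can be tracked
  oc annotate --config=/etc/origin/node/node.kubeconfig "node/${NODE_NAME}" \
    node.openshift.io/md5sum="$(cut -d' ' -f1 /tmp/.new)" --overwrite
  cp -f /tmp/.new /tmp/.old
  sleep 180
done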
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (17 retries left).Result was: { "attempts": 44, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
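The wait task keeps reissuing the same oc_obj query because the rollout is stalled: every attempt has reported desiredNumberScheduled 15 but numberAvailable 7, so eight nodes are not bringing the updated sync pod to a ready state. Assuming the task's until condition compares available against desired (the condition itself is not printed in this log), an equivalent manual spot-check from the master would look something like the following; oc rollout status is an alternative that should work against a RollingUpdate daemonset on 3.11:

# poll the same two status fields the wait loop appears to compare
oc get daemonset sync -n openshift-node \
  -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}{"\n"}'

# block until the RollingUpdate finishes, or see where it is stuck
oc rollout status daemonset/sync -n openshift-node

# list the sync pods that are not Running to find the stuck nodes
oc get pods -n openshift-node -l app=sync -o wide | grep -v Running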
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '[raw oc_obj stdout: the same DaemonSet object serialized again, including the full embedded sync script; verbatim repeat elided]', '')
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (16 retries left). Result was: { "attempts": 45, "changed": false, ... [same oc_obj result as above; DaemonSet "sync" status unchanged: "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8] ... "retries": 61, "state": "list" }
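The "cmd" field above shows what this retry task is actually doing: it shells out to /usr/bin/oc get daemonset sync -o json -n openshift-node on each attempt and compares the status counters. The same counters can be watched by hand while the playbook loops; a minimal sketch, assuming the admin.kubeconfig path this log already uses (this is not part of the playbook itself):

    # Poll the rollout counters the playbook is waiting on (every 15s, matching
    # the sync script's own cadence); the wait succeeds once updatedNumberScheduled
    # and numberReady both reach desiredNumberScheduled (15 here).
    while true; do
      oc get daemonset sync -n openshift-node \
        --config=/etc/origin/master/admin.kubeconfig \
        -o jsonpath='{.status.desiredNumberScheduled} {.status.updatedNumberScheduled} {.status.numberReady}{"\n"}'
      sleep 15
    done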
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '[raw oc_obj stdout: identical DaemonSet JSON; verbatim repeat elided]', '')
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (15 retries left). Result was: { "attempts": 46, "changed": false, ... [identical oc_obj result; DaemonSet "sync" status still unchanged: 7 of 15 desired pods ready, 8 unavailable, 8 updated, "observedGeneration": 16] ... "retries": 61, "state": "list" }
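The status block also suggests why the wait keeps failing: the DaemonSet wants 15 pods, only 7 are Ready, and 8 are already unavailable, so with "maxUnavailable": "50%" the RollingUpdate cannot progress until the updated pods come up. To find the stuck pods, the DaemonSet's own selector ("app": "sync" in matchLabels above) can be queried directly; a sketch under the same kubeconfig assumption, with the pod name hypothetical:

    # List the sync pods with their nodes to spot the ones that never go Ready.
    oc get pods -n openshift-node -l app=sync -o wide \
      --config=/etc/origin/master/admin.kubeconfig
    # Then inspect the events of one stuck pod (pod name here is hypothetical):
    oc describe pod sync-xxxxx -n openshift-node \
      --config=/etc/origin/master/admin.kubeconfig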
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
(0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (14 retries left).Result was: { "attempts": 47, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
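The command embedded in the sync DaemonSet above is a bash watch loop: it extracts the bootstrap configmap, compares checksums, and swaps the new node-config.yaml into place before bouncing the kubelet. A minimal sketch of that compare-and-swap core, with the label and annotation steps trimmed (the paths are the ones used by the script above):

    #!/bin/bash
    # Trimmed sketch of the sync pod's change-detection loop.
    set -euo pipefail
    cfg=/etc/origin/node/node-config.yaml
    tmp=/etc/origin/node/tmp/node-config.yaml
    if [[ -f "$cfg" ]]; then md5sum "$cfg" > /tmp/.old; else touch /tmp/.old; fi
    while true; do
      [[ -f "$tmp" ]] || { sleep 15 & wait $!; continue; }  # nothing extracted yet
      md5sum "$tmp" > /tmp/.new
      if [[ "$(cat /tmp/.old)" != "$(cat /tmp/.new)" ]]; then
        mv "$tmp" "$cfg"                                # promote the freshly extracted config
        pkill -U 0 -f '(^|/)hyperkube kubelet ' || :    # systemd brings the kubelet back up
      fi
      cp -f /tmp/.new /tmp/.old
      sleep 180 & wait $!                               # backgrounded so the TERM trap can interrupt
    done

The full script additionally annotates the node with the checksum (node.openshift.io/md5sum), which is how the rollout can be tracked from the API side.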
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (14 retries left).Result was: { "attempts": 47, "changed": false, "invocation": { "module_args": { ... as above ... } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ ... the same DaemonSet object; status unchanged: 7 of 15 ready, 8 unavailable, 8 updated ... ], "returncode": 0 }, "retries": 61, "state": "list" }
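This retry task is polling the rollout of that DaemonSet: with updateStrategy RollingUpdate and maxUnavailable 50%, pods are replaced in waves, and the role re-lists the object until the status counters converge. A hand-rolled equivalent of the wait (the jsonpath probe here is illustrative, not what oc_obj.py runs; the module simply executes the `oc get daemonset sync -o json` seen throughout this log):

    # Poll until every desired pod is both updated and available.
    while true; do
      read -r desired avail updated < <(oc get daemonset sync -n openshift-node \
          -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable} {.status.updatedNumberScheduled}')
      echo "info: sync rollout: ${avail}/${desired} available, ${updated} updated"
      [[ "$desired" == "$avail" && "$desired" == "$updated" ]] && break
      sleep 10
    done

The status block above shows why the task keeps retrying: desiredNumberScheduled is 15, but only 7 pods are ready and 8 are still unavailable, so the current wave has not finished.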
"results": [{"status": {"numberReady": 7, "observedGeneration": 16, "numberAvailable": 7, "desiredNumberScheduled": 15, "numberUnavailable": 8, "currentNumberScheduled": 9, "numberMisscheduled": 0, "updatedNumberScheduled": 8}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93874674", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (13 retries left).Result was: { "attempts": 48, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
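Two small text-processing steps in the sync script are worth unpacking: the bootstrap profile name is scraped out of the sysconfig file with sed, and the node labels are recovered from the --node-labels flag that openshift-node-config emits. Both are shown here on hypothetical sample data (the values below are not from this cluster):

    # (1) Extract BOOTSTRAP_CONFIG_NAME, skipping commented-out lines.
    file=$(mktemp)
    echo 'BOOTSTRAP_CONFIG_NAME=node-config-compute' > "$file"   # sample value
    name="$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\1|p' "$file" | head -1)"
    echo "profile: $name"                                        # -> node-config-compute

    # (2) Turn "--node-labels=a=1,b=2" into the space-separated list oc label expects.
    args='--node-labels=node-role.kubernetes.io/compute=true,region=primary'  # sample flag string
    labels=$(tr ' ' '\n' <<<"$args" | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\n' ' ')
    echo "labels: $labels"                                       # -> node-role.kubernetes.io/compute=true region=primary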
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93874674", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 9, "desiredNumberScheduled": 15, "numberAvailable": 7, "numberMisscheduled": 0, "numberReady": 7, "numberUnavailable": 8, "observedGeneration": 16, "updatedNumberScheduled": 8 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
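[The task above polls the sync DaemonSet through the oc_obj module and keeps retrying until every desired pod is updated and available; at this point only 7 of 15 pods were available. A minimal sketch of an equivalent manual check, assuming admin access on a master; these commands are illustrative and are not taken from the playbook:

    # Query the rollout counters the retry condition is built on (hypothetical manual check)
    oc get daemonset sync -n openshift-node \
      -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable} {.status.updatedNumberScheduled}'
    # The playbook task retries (up to its retry budget, ~60 attempts here) while the
    # available/updated counts still lag desiredNumberScheduled (15 in this cluster).
]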
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (12 retries left).Result was: { "attempts": 49, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{ ... remainder of the last-applied-configuration annotation identical to the dumps above, elided ... }\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { ... selector, template metadata, and volumes identical to the dumps above ... "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash ... [sync script identical to the dumps above, elided] ... if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
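[Between attempts the rollout advanced from 7 to 13 available pods. With updateStrategy RollingUpdate and maxUnavailable 50%, up to 7 of the 15 node pods may be down at once while the registry.redhat.io/openshift3/ose-node:v3.11 image spreads across the cluster. A sketch of watching the same rollout interactively; hypothetical troubleshooting commands, not taken from the playbook:

    # Follow the DaemonSet rollout until it completes or stalls
    oc rollout status daemonset/sync -n openshift-node
    # List the sync pods and the nodes they run on
    oc get pods -n openshift-node -l app=sync -o wide
]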
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (11 retries left).Result was: { "attempts": 50, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (10 retries left).Result was: { "attempts": 51, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
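The task being retried here polls /usr/bin/oc get daemonset sync -o json -n openshift-node and keeps failing because the status block still reports only 13 of 15 scheduled pods available (14 updated, 2 unavailable). A minimal sketch of an equivalent manual readiness check, assuming bash on a master and the admin kubeconfig path shown in the log:

    #!/bin/bash
    # Sketch only: poll the sync daemonset until every scheduled pod is
    # available, mirroring the playbook's "ready and available" retry condition.
    while true; do
      read -r desired available < <(oc get daemonset sync -n openshift-node \
          --config=/etc/origin/master/admin.kubeconfig \
          -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}')
      echo "info: sync daemonset ${available:-0}/${desired} pods available"
      if [[ -n "${desired}" && "${desired}" == "${available:-}" ]]; then
        break
      fi
      sleep 15
    done

With updateStrategy rollingUpdate and maxUnavailable 50%, the rollout is allowed to proceed while pods restart, so a count stuck at 13/15 across many attempts suggests the new ose-node:v3.11 sync pods on the two unavailable nodes are not coming up; oc get pods -n openshift-node -o wide would localize which nodes those are.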
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (9 retries left).Result was: { "attempts": 52, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (8 retries left).Result was: { "attempts": 53, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
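For reference, the condition this retry loop is polling is numberAvailable catching up to desiredNumberScheduled (stuck here at 13 of 15, with 14 of 15 pods updated, so the rolling update is blocked on pods not becoming ready rather than on the maxUnavailable: 50% strategy). A minimal sketch of checking the same thing by hand, assuming admin.kubeconfig on a master and the daemonset's app=sync pod label:

    # Compare desired vs. available counts for the sync daemonset
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node \
      get daemonset sync -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}{"\n"}'
    # List the sync pods that are not Running to find the two holding up the rollout
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node \
      get pods -l app=sync -o wide | grep -v -w Running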
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (7 retries left).Result was: { "attempts": 54, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (6 retries left).Result was: { "attempts": 55, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (5 retries left).Result was: { "attempts": 56, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node",
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (4 retries left).Result was: { "attempts": 57, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
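The status block in these responses is what keeps the task retrying: desiredNumberScheduled is 15, but only 14 pods are updated and 13 available, so the two unavailable sync pods are what the wait is blocked on. A quick illustrative triage, assuming the app=sync label from the daemonset spec (the pod name is hypothetical; these commands are not part of the playbook):

    # List sync pods with their nodes to spot the two that are not Ready.
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node \
      get pods -l app=sync -o wide
    # Inspect events for one of the unready pods (replace the placeholder name).
    oc --config=/etc/origin/master/admin.kubeconfig -n openshift-node \
      describe pod sync-xxxxx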
FAILED - RETRYING: Wait for the sync daemonset to become ready and available (3 retries left).Result was: { "attempts": 58, [... invocation and daemonset dump identical to the attempt-57 result above ...] "retries": 61, "state": "list" }
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" } Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", 
"results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "sync"}}, "templateGeneration": 16, "updateStrategy": {"rollingUpdate": {"maxUnavailable": "50%"}, "type": "RollingUpdate"}, "template": {"spec": {"priorityClassName": "system-node-critical", "dnsPolicy": "ClusterFirst", "securityContext": {}, "serviceAccountName": "sync", "schedulerName": "default-scheduler", "hostNetwork": true, "serviceAccount": "sync", "terminationGracePeriodSeconds": 1, "restartPolicy": "Always", "hostPID": true, "volumes": [{"hostPath": {"path": "/etc/origin/node", "type": ""}, "name": "host-config"}, {"hostPath": {"path": "/etc/sysconfig", "type": ""}, "name": "host-sysconfig-node"}, {"hostPath": {"path": "/var/run/dbus", "type": ""}, "name": "var-run-dbus"}, {"hostPath": {"path": "/run/systemd/system", "type": ""}, "name": "run-systemd-system"}], "tolerations": [{"operator": "Exists"}], "containers": [{"securityContext": {"privileged": true, "runAsUser": 0}, "name": "sync", "image": "registry.redhat.io/openshift3/ose-node:v3.11", "volumeMounts": [{"mountPath": "/etc/origin/node/", "name": "host-config"}, {"readOnly": true, "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node"}, {"readOnly": true, "mountPath": "/var/run/dbus", "name": "var-run-dbus"}, {"readOnly": true, "mountPath": "/run/systemd/system", "name": "run-systemd-system"}], "terminationMessagePolicy": "File", "command": ["/bin/bash", "-c", "#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap \'kill $(jobs -p); exit 0\' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\"info: Waiting for the node sysconfig file to be created\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n name=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"\\n if [[ -z \\"${name}\\" ]]; then\\n echo \\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\" 2>&1\\n sleep 15 & wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p\' \\"${file}\\" | head -1)\\"; then\\n echo \\"error: Unable to check for bootstrap config, exiting\\" 2>&1\\n kill $pid\\n exit 1\\n fi\\n if [[ \\"${updated}\\" != \\"${name}\\" ]]; then\\n echo \\"info: Bootstrap configuration profile name changed, exiting\\" 2>&1\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) &\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\"configmaps/${name}\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\" > /dev/null; then\\n echo \\"error: Unable to retrieve latest config for node\\" 2>&1\\n sleep 15 &\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\"$KUBELET_HOSTNAME_OVERRIDE\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\" >> /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null > /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\\n if [[ \\"$( cat /tmp/.old )\\" != \\"$( cat /tmp/.new )\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\"info: Configuration changed, restarting kubelet\\" 2>&1\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\"; then\\n labels=$(tr \' \' \'\\\\n\' <<<$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\n\' \' \')\\n if [[ -n \\"${labels}\\" ]]; then\\n echo \\"info: Applying node labels $labels\\" 2>&1\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" ${labels} --overwrite; then\\n echo \\"error: Unable to apply labels, will retry in 10\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\"error: The downloaded node configuration is invalid, retrying later\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\n echo \\"error: Unable to restart Kubelet\\" 2>&1\\n sleep 10 &\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\"node/${NODE_NAME}\\" \\\\\\n node.openshift.io/md5sum=\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 &\\n wait $!\\ndone\\n"], "env": [{"valueFrom": {"fieldRef": {"fieldPath": "spec.nodeName", "apiVersion": "v1"}}, "name": "NODE_NAME"}], "imagePullPolicy": "IfNotPresent", "terminationMessagePath": "/dev/termination-log", "resources": {}}]}, "metadata": {"labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "creationTimestamp": null, "annotations": {"scheduler.alpha.kubernetes.io/critical-pod": ""}}}}, "apiVersion": "extensions/v1beta1", "metadata": {"name": "sync", "generation": 16, "labels": {"component": "network", "app": "sync", "openshift.io/component": "sync", "type": "infra"}, "namespace": "openshift-node", "resourceVersion": "93877172", "creationTimestamp": "2018-09-13T19:03:37Z", "annotations": {"image.openshift.io/triggers": "[\\n {\\"from\\":{\\"kind\\":\\"ImageStreamTag\\",\\"name\\":\\"node:v3.11\\"},\\"fieldPath\\":\\"spec.template.spec.containers[?(@.name==\\\\\\"sync\\\\\\")].image\\"}\\n]\\n", "kubectl.kubernetes.io/last-applied-configuration": "{\\"apiVersion\\":\\"apps/v1\\",\\"kind\\":\\"DaemonSet\\",\\"metadata\\":{\\"annotations\\":{\\"image.openshift.io/triggers\\":\\"[\\\\n {\\\\\\"from\\\\\\":{\\\\\\"kind\\\\\\":\\\\\\"ImageStreamTag\\\\\\",\\\\\\"name\\\\\\":\\\\\\"node:v3.11\\\\\\"},\\\\\\"fieldPath\\\\\\":\\\\\\"spec.template.spec.containers[?(@.name==\\\\\\\\\\\\\\"sync\\\\\\\\\\\\\\")].image\\\\\\"}\\\\n]\\\\n\\",\\"kubernetes.io/description\\":\\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\\\n\\"},\\"name\\":\\"sync\\",\\"namespace\\":\\"openshift-node\\"},\\"spec\\":{\\"selector\\":{\\"matchLabels\\":{\\"app\\":\\"sync\\"}},\\"template\\":{\\"metadata\\":{\\"annotations\\":{\\"scheduler.alpha.kubernetes.io/critical-pod\\":\\"\\"},\\"labels\\":{\\"app\\":\\"sync\\",\\"component\\":\\"network\\",\\"openshift.io/component\\":\\"sync\\",\\"type\\":\\"infra\\"}},\\"spec\\":{\\"containers\\":[{\\"command\\":[\\"/bin/bash\\",\\"-c\\",\\"#!/bin/bash\\\\nset -euo pipefail\\\\n\\\\n# set by the node image\\\\nunset KUBECONFIG\\\\n\\\\ntrap \'kill $(jobs -p); exit 0\' TERM\\\\n\\\\n# track the current state of the config\\\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\\\n md5sum /etc/origin/node/node-config.yaml \\\\u003e /tmp/.old\\\\nelse\\\\n touch /tmp/.old\\\\nfi\\\\n\\\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\\\nwhile true; do\\\\n file=/etc/sysconfig/origin-node\\\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\\\n file=/etc/sysconfig/atomic-openshift-node\\\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\\\n file=/etc/sysconfig/origin-node\\\\n else\\\\n echo \\\\\\"info: Waiting for the node sysconfig file to be created\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n name=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"\\\\n if [[ -z \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026 wait\\\\n continue\\\\n fi\\\\n # in the background check to see if 
the value changes and exit if so\\\\n pid=$BASHPID\\\\n (\\\\n while true; do\\\\n if ! updated=\\\\\\"$(sed -nE \'s|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\\\\\1|p\' \\\\\\"${file}\\\\\\" | head -1)\\\\\\"; then\\\\n echo \\\\\\"error: Unable to check for bootstrap config, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 1\\\\n fi\\\\n if [[ \\\\\\"${updated}\\\\\\" != \\\\\\"${name}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Bootstrap configuration profile name changed, exiting\\\\\\" 2\\\\u003e\\\\u00261\\\\n kill $pid\\\\n exit 0\\\\n fi\\\\n sleep 15\\\\n done\\\\n ) \\\\u0026\\\\n break\\\\ndone\\\\nmkdir -p /etc/origin/node/tmp\\\\n# periodically refresh both node-config.yaml and relabel the node\\\\nwhile true; do\\\\n if ! oc extract \\\\\\"configmaps/${name}\\\\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\\\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\\\\" \\\\u003e /dev/null; then\\\\n echo \\\\\\"error: Unable to retrieve latest config for node\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 15 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n\\\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\\\n if ! [[ -z \\\\\\"$KUBELET_HOSTNAME_OVERRIDE\\\\\\" ]]; then\\\\n #Patching node-config for hostname override\\\\n echo \\\\\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\\\\" \\\\u003e\\\\u003e /etc/origin/node/tmp/node-config.yaml\\\\n fi\\\\n\\\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\\\n cat /dev/null \\\\u003e /tmp/.old\\\\n fi\\\\n\\\\n md5sum /etc/origin/node/tmp/node-config.yaml \\\\u003e /tmp/.new\\\\n if [[ \\\\\\"$( cat /tmp/.old )\\\\\\" != \\\\\\"$( cat /tmp/.new )\\\\\\" ]]; then\\\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\\\n echo \\\\\\"info: Configuration changed, restarting kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n # TODO: kubelet doesn\'t relabel nodes, best effort for now\\\\n # https://github.com/kubernetes/kubernetes/issues/59314\\\\n if args=\\\\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\\\\"; then\\\\n labels=$(tr \' \' \'\\\\\\\\n\' \\\\u003c\\\\u003c\\\\u003c$args | sed -ne \'/^--node-labels=/ { s/^--node-labels=//; p; }\' | tr \',\\\\\\\\n\' \' \')\\\\n if [[ -n \\\\\\"${labels}\\\\\\" ]]; then\\\\n echo \\\\\\"info: Applying node labels $labels\\\\\\" 2\\\\u003e\\\\u00261\\\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" ${labels} --overwrite; then\\\\n echo \\\\\\"error: Unable to apply labels, will retry in 10\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n else\\\\n echo \\\\\\"error: The downloaded node configuration is invalid, retrying later\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n if ! 
pkill -U 0 -f \'(^|/)hyperkube kubelet \'; then\\\\n echo \\\\\\"error: Unable to restart Kubelet\\\\\\" 2\\\\u003e\\\\u00261\\\\n sleep 10 \\\\u0026\\\\n wait $!\\\\n continue\\\\n fi\\\\n fi\\\\n # annotate node with md5sum of the config\\\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\\\\"node/${NODE_NAME}\\\\\\" \\\\\\\\\\\\n node.openshift.io/md5sum=\\\\\\"$( cat /tmp/.new | cut -d\' \' -f1 )\\\\\\" --overwrite\\\\n cp -f /tmp/.new /tmp/.old\\\\n sleep 180 \\\\u0026\\\\n wait $!\\\\ndone\\\\n\\"],\\"env\\":[{\\"name\\":\\"NODE_NAME\\",\\"valueFrom\\":{\\"fieldRef\\":{\\"fieldPath\\":\\"spec.nodeName\\"}}}],\\"image\\":\\" \\",\\"name\\":\\"sync\\",\\"securityContext\\":{\\"privileged\\":true,\\"runAsUser\\":0},\\"volumeMounts\\":[{\\"mountPath\\":\\"/etc/origin/node/\\",\\"name\\":\\"host-config\\"},{\\"mountPath\\":\\"/etc/sysconfig\\",\\"name\\":\\"host-sysconfig-node\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/var/run/dbus\\",\\"name\\":\\"var-run-dbus\\",\\"readOnly\\":true},{\\"mountPath\\":\\"/run/systemd/system\\",\\"name\\":\\"run-systemd-system\\",\\"readOnly\\":true}]}],\\"hostNetwork\\":true,\\"hostPID\\":true,\\"priorityClassName\\":\\"system-node-critical\\",\\"serviceAccountName\\":\\"sync\\",\\"terminationGracePeriodSeconds\\":1,\\"tolerations\\":[{\\"operator\\":\\"Exists\\"}],\\"volumes\\":[{\\"hostPath\\":{\\"path\\":\\"/etc/origin/node\\"},\\"name\\":\\"host-config\\"},{\\"hostPath\\":{\\"path\\":\\"/etc/sysconfig\\"},\\"name\\":\\"host-sysconfig-node\\"},{\\"hostPath\\":{\\"path\\":\\"/var/run/dbus\\"},\\"name\\":\\"var-run-dbus\\"},{\\"hostPath\\":{\\"path\\":\\"/run/systemd/system\\"},\\"name\\":\\"run-systemd-system\\"}]}},\\"updateStrategy\\":{\\"rollingUpdate\\":{\\"maxUnavailable\\":\\"50%\\"},\\"type\\":\\"RollingUpdate\\"}}}\\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n"}, "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492"}}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (2 retries left).Result was: { "attempts": 59, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as 
appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! 
-f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { 
"scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! 
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py ESTABLISH SSH CONNECTION FOR USER: root SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"'' (0, '\n{"invocation": {"module_args": {"files": null, "kind": "daemonset", "force": false, "all_namespaces": null, "field_selector": null, "namespace": "openshift-node", "delete_after": false, "kubeconfig": "/etc/origin/master/admin.kubeconfig", "content": null, "state": "list", "debug": false, "selector": null, "name": "sync"}}, "state": "list", "changed": false, "results": {"returncode": 0, "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [{"status": {"numberReady": 13, "observedGeneration": 16, "numberAvailable": 13, "desiredNumberScheduled": 15, "numberUnavailable": 2, "currentNumberScheduled": 15, "numberMisscheduled": 0, "updatedNumberScheduled": 14}, "kind": "DaemonSet", "spec": { ... }, "apiVersion": "extensions/v1beta1", "metadata": { ... }}]}}\n', '') FAILED - RETRYING: Wait for the sync daemonset to become ready and available (1 retries left).Result was: { "attempts": 60, "changed": false, "invocation": { ... }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { ... "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "retries": 61, "state": "list" }
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_obj.py
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r sp-os-master01.os.ad.scanplus.de '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
fatal: [sp-os-master01.os.ad.scanplus.de]: FAILED!
=> { "attempts": 60, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "daemonset", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": "sync", "namespace": "openshift-node", "selector": null, "state": "list" } }, "results": { "cmd": "/usr/bin/oc get daemonset sync -o json -n openshift-node", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "annotations": { "image.openshift.io/triggers": "[\n {\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"node:v3.11\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"sync\\\")].image\"}\n]\n", "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"image.openshift.io/triggers\":\"[\\n {\\\"from\\\":{\\\"kind\\\":\\\"ImageStreamTag\\\",\\\"name\\\":\\\"node:v3.11\\\"},\\\"fieldPath\\\":\\\"spec.template.spec.containers[?(@.name==\\\\\\\"sync\\\\\\\")].image\\\"}\\n]\\n\",\"kubernetes.io/description\":\"This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\\n\"},\"name\":\"sync\",\"namespace\":\"openshift-node\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"sync\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"app\":\"sync\",\"component\":\"network\",\"openshift.io/component\":\"sync\",\"type\":\"infra\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"#!/bin/bash\\nset -euo pipefail\\n\\n# set by the node image\\nunset KUBECONFIG\\n\\ntrap 'kill $(jobs -p); exit 0' TERM\\n\\n# track the current state of the config\\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\\n md5sum /etc/origin/node/node-config.yaml \\u003e /tmp/.old\\nelse\\n touch /tmp/.old\\nfi\\n\\n# loop until BOOTSTRAP_CONFIG_NAME is set\\nwhile true; do\\n file=/etc/sysconfig/origin-node\\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\\n file=/etc/sysconfig/atomic-openshift-node\\n elif [[ -f /etc/sysconfig/origin-node ]]; then\\n file=/etc/sysconfig/origin-node\\n else\\n echo \\\"info: Waiting for the node sysconfig file to be created\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n name=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"\\n if [[ -z \\\"${name}\\\" ]]; then\\n echo \\\"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026 wait\\n continue\\n fi\\n # in the background check to see if the value changes and exit if so\\n pid=$BASHPID\\n (\\n while true; do\\n if ! updated=\\\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\\\1|p' \\\"${file}\\\" | head -1)\\\"; then\\n echo \\\"error: Unable to check for bootstrap config, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 1\\n fi\\n if [[ \\\"${updated}\\\" != \\\"${name}\\\" ]]; then\\n echo \\\"info: Bootstrap configuration profile name changed, exiting\\\" 2\\u003e\\u00261\\n kill $pid\\n exit 0\\n fi\\n sleep 15\\n done\\n ) \\u0026\\n break\\ndone\\nmkdir -p /etc/origin/node/tmp\\n# periodically refresh both node-config.yaml and relabel the node\\nwhile true; do\\n if ! 
oc extract \\\"configmaps/${name}\\\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \\\"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\\\" \\u003e /dev/null; then\\n echo \\\"error: Unable to retrieve latest config for node\\\" 2\\u003e\\u00261\\n sleep 15 \\u0026\\n wait $!\\n continue\\n fi\\n\\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\\n if ! [[ -z \\\"$KUBELET_HOSTNAME_OVERRIDE\\\" ]]; then\\n #Patching node-config for hostname override\\n echo \\\"nodeName: $KUBELET_HOSTNAME_OVERRIDE\\\" \\u003e\\u003e /etc/origin/node/tmp/node-config.yaml\\n fi\\n\\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\\n cat /dev/null \\u003e /tmp/.old\\n fi\\n\\n md5sum /etc/origin/node/tmp/node-config.yaml \\u003e /tmp/.new\\n if [[ \\\"$( cat /tmp/.old )\\\" != \\\"$( cat /tmp/.new )\\\" ]]; then\\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\\n echo \\\"info: Configuration changed, restarting kubelet\\\" 2\\u003e\\u00261\\n # TODO: kubelet doesn't relabel nodes, best effort for now\\n # https://github.com/kubernetes/kubernetes/issues/59314\\n if args=\\\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\\\"; then\\n labels=$(tr ' ' '\\\\n' \\u003c\\u003c\\u003c$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\\\n' ' ')\\n if [[ -n \\\"${labels}\\\" ]]; then\\n echo \\\"info: Applying node labels $labels\\\" 2\\u003e\\u00261\\n if ! oc label --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" ${labels} --overwrite; then\\n echo \\\"error: Unable to apply labels, will retry in 10\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n else\\n echo \\\"error: The downloaded node configuration is invalid, retrying later\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n if ! 
pkill -U 0 -f '(^|/)hyperkube kubelet '; then\\n echo \\\"error: Unable to restart Kubelet\\\" 2\\u003e\\u00261\\n sleep 10 \\u0026\\n wait $!\\n continue\\n fi\\n fi\\n # annotate node with md5sum of the config\\n oc annotate --config=/etc/origin/node/node.kubeconfig \\\"node/${NODE_NAME}\\\" \\\\\\n node.openshift.io/md5sum=\\\"$( cat /tmp/.new | cut -d' ' -f1 )\\\" --overwrite\\n cp -f /tmp/.new /tmp/.old\\n sleep 180 \\u0026\\n wait $!\\ndone\\n\"],\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\" \",\"name\":\"sync\",\"securityContext\":{\"privileged\":true,\"runAsUser\":0},\"volumeMounts\":[{\"mountPath\":\"/etc/origin/node/\",\"name\":\"host-config\"},{\"mountPath\":\"/etc/sysconfig\",\"name\":\"host-sysconfig-node\",\"readOnly\":true},{\"mountPath\":\"/var/run/dbus\",\"name\":\"var-run-dbus\",\"readOnly\":true},{\"mountPath\":\"/run/systemd/system\",\"name\":\"run-systemd-system\",\"readOnly\":true}]}],\"hostNetwork\":true,\"hostPID\":true,\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"sync\",\"terminationGracePeriodSeconds\":1,\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/origin/node\"},\"name\":\"host-config\"},{\"hostPath\":{\"path\":\"/etc/sysconfig\"},\"name\":\"host-sysconfig-node\"},{\"hostPath\":{\"path\":\"/var/run/dbus\"},\"name\":\"var-run-dbus\"},{\"hostPath\":{\"path\":\"/run/systemd/system\"},\"name\":\"run-systemd-system\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"50%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/description": "This daemon set provides dynamic configuration of nodes and relabels nodes as appropriate.\n" }, "creationTimestamp": "2018-09-13T19:03:37Z", "generation": 16, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" }, "name": "sync", "namespace": "openshift-node", "resourceVersion": "93877172", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-node/daemonsets/sync", "uid": "b84d3b51-b787-11e8-9af4-005056aa3492" }, "spec": { "revisionHistoryLimit": 10, "selector": { "matchLabels": { "app": "sync" } }, "template": { "metadata": { "annotations": { "scheduler.alpha.kubernetes.io/critical-pod": "" }, "creationTimestamp": null, "labels": { "app": "sync", "component": "network", "openshift.io/component": "sync", "type": "infra" } }, "spec": { "containers": [ { "command": [ "/bin/bash", "-c", "#!/bin/bash\nset -euo pipefail\n\n# set by the node image\nunset KUBECONFIG\n\ntrap 'kill $(jobs -p); exit 0' TERM\n\n# track the current state of the config\nif [[ -f /etc/origin/node/node-config.yaml ]]; then\n md5sum /etc/origin/node/node-config.yaml > /tmp/.old\nelse\n touch /tmp/.old\nfi\n\n# loop until BOOTSTRAP_CONFIG_NAME is set\nwhile true; do\n file=/etc/sysconfig/origin-node\n if [[ -f /etc/sysconfig/atomic-openshift-node ]]; then\n file=/etc/sysconfig/atomic-openshift-node\n elif [[ -f /etc/sysconfig/origin-node ]]; then\n file=/etc/sysconfig/origin-node\n else\n echo \"info: Waiting for the node sysconfig file to be created\" 2>&1\n sleep 15 & wait\n continue\n fi\n name=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"\n if [[ -z \"${name}\" ]]; then\n echo \"info: Waiting for BOOTSTRAP_CONFIG_NAME to be set\" 2>&1\n sleep 15 & wait\n continue\n fi\n # in the background check to see if the value changes and exit if so\n pid=$BASHPID\n (\n while true; do\n if ! 
updated=\"$(sed -nE 's|^BOOTSTRAP_CONFIG_NAME=([^#].+)|\\1|p' \"${file}\" | head -1)\"; then\n echo \"error: Unable to check for bootstrap config, exiting\" 2>&1\n kill $pid\n exit 1\n fi\n if [[ \"${updated}\" != \"${name}\" ]]; then\n echo \"info: Bootstrap configuration profile name changed, exiting\" 2>&1\n kill $pid\n exit 0\n fi\n sleep 15\n done\n ) &\n break\ndone\nmkdir -p /etc/origin/node/tmp\n# periodically refresh both node-config.yaml and relabel the node\nwhile true; do\n if ! oc extract \"configmaps/${name}\" -n openshift-node --to=/etc/origin/node/tmp --confirm --request-timeout=10s --config /etc/origin/node/node.kubeconfig \"--token=$( cat /var/run/secrets/kubernetes.io/serviceaccount/token )\" > /dev/null; then\n echo \"error: Unable to retrieve latest config for node\" 2>&1\n sleep 15 &\n wait $!\n continue\n fi\n\n KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :\n if ! [[ -z \"$KUBELET_HOSTNAME_OVERRIDE\" ]]; then\n #Patching node-config for hostname override\n echo \"nodeName: $KUBELET_HOSTNAME_OVERRIDE\" >> /etc/origin/node/tmp/node-config.yaml\n fi\n\n # detect whether the node-config.yaml has changed, and if so trigger a restart of the kubelet.\n if [[ ! -f /etc/origin/node/node-config.yaml ]]; then\n cat /dev/null > /tmp/.old\n fi\n\n md5sum /etc/origin/node/tmp/node-config.yaml > /tmp/.new\n if [[ \"$( cat /tmp/.old )\" != \"$( cat /tmp/.new )\" ]]; then\n mv /etc/origin/node/tmp/node-config.yaml /etc/origin/node/node-config.yaml\n SYSTEMD_IGNORE_CHROOT=1 systemctl restart tuned || :\n echo \"info: Configuration changed, restarting kubelet\" 2>&1\n # TODO: kubelet doesn't relabel nodes, best effort for now\n # https://github.com/kubernetes/kubernetes/issues/59314\n if args=\"$(openshift-node-config --config /etc/origin/node/node-config.yaml)\"; then\n labels=$(tr ' ' '\\n' <<<$args | sed -ne '/^--node-labels=/ { s/^--node-labels=//; p; }' | tr ',\\n' ' ')\n if [[ -n \"${labels}\" ]]; then\n echo \"info: Applying node labels $labels\" 2>&1\n if ! oc label --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" ${labels} --overwrite; then\n echo \"error: Unable to apply labels, will retry in 10\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n else\n echo \"error: The downloaded node configuration is invalid, retrying later\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n if ! 
pkill -U 0 -f '(^|/)hyperkube kubelet '; then\n echo \"error: Unable to restart Kubelet\" 2>&1\n sleep 10 &\n wait $!\n continue\n fi\n fi\n # annotate node with md5sum of the config\n oc annotate --config=/etc/origin/node/node.kubeconfig \"node/${NODE_NAME}\" \\\n node.openshift.io/md5sum=\"$( cat /tmp/.new | cut -d' ' -f1 )\" --overwrite\n cp -f /tmp/.new /tmp/.old\n sleep 180 &\n wait $!\ndone\n" ], "env": [ { "name": "NODE_NAME", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "spec.nodeName" } } } ], "image": "registry.redhat.io/openshift3/ose-node:v3.11", "imagePullPolicy": "IfNotPresent", "name": "sync", "resources": {}, "securityContext": { "privileged": true, "runAsUser": 0 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/origin/node/", "name": "host-config" }, { "mountPath": "/etc/sysconfig", "name": "host-sysconfig-node", "readOnly": true }, { "mountPath": "/var/run/dbus", "name": "var-run-dbus", "readOnly": true }, { "mountPath": "/run/systemd/system", "name": "run-systemd-system", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "hostPID": true, "priorityClassName": "system-node-critical", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "sync", "serviceAccountName": "sync", "terminationGracePeriodSeconds": 1, "tolerations": [ { "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/etc/origin/node", "type": "" }, "name": "host-config" }, { "hostPath": { "path": "/etc/sysconfig", "type": "" }, "name": "host-sysconfig-node" }, { "hostPath": { "path": "/var/run/dbus", "type": "" }, "name": "var-run-dbus" }, { "hostPath": { "path": "/run/systemd/system", "type": "" }, "name": "run-systemd-system" } ] } }, "templateGeneration": 16, "updateStrategy": { "rollingUpdate": { "maxUnavailable": "50%" }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 15, "desiredNumberScheduled": 15, "numberAvailable": 13, "numberMisscheduled": 0, "numberReady": 13, "numberUnavailable": 2, "observedGeneration": 16, "updatedNumberScheduled": 14 } } ], "returncode": 0 }, "state": "list" }

PLAY RECAP ******************************************************************
localhost                        : ok=33   changed=0    unreachable=0    failed=0
sp-os-infra01.os.ad.scanplus.de  : ok=19   changed=1    unreachable=0    failed=0
sp-os-infra02.os.ad.scanplus.de  : ok=19   changed=1    unreachable=0    failed=0
sp-os-master01.os.ad.scanplus.de : ok=255  changed=52   unreachable=0    failed=1
sp-os-node02.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node03.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node04.os.ad.scanplus.de   : ok=16   changed=0    unreachable=0    failed=1
sp-os-node05.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node06.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node07.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node08.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node09.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node10.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node11.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
sp-os-node12.os.ad.scanplus.de   : ok=19   changed=1    unreachable=0    failed=0
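The daemonset status in the failed task result above shows the rollout stuck at 13 of 15 pods available (14 updated), which is exactly the condition the wait task polled 60 times for. A minimal diagnostic sketch, assuming the admin kubeconfig on the first master and the openshift-node namespace shown in the log; these commands are illustrative follow-ups, not part of the playbook run:

    # List sync pods and their nodes; the two not Running/Ready are the unavailable ones
    oc --config=/etc/origin/master/admin.kubeconfig get pods -n openshift-node -l app=sync -o wide

    # Watch the rollout the upgrade task was waiting on
    oc --config=/etc/origin/master/admin.kubeconfig rollout status ds/sync -n openshift-node

    # Recent events usually name the cause (image pull, disk pressure, taint) and the node
    oc --config=/etc/origin/master/admin.kubeconfig get events -n openshift-node --sort-by=.lastTimestamp | tail -20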
INSTALLER STATUS ************************************************************
Initialization : Complete (0:06:14)

Wednesday 09 January 2019  16:13:37 +0100 (0:10:25.030)       0:34:12.007 *****
===============================================================================
openshift_node_group : Wait for the sync daemonset to become ready and available -- 625.03s
/usr/share/ansible/openshift-ansible/roles/openshift_node_group/tasks/sync.yml:65
Install registry_auth dependencies -- 171.30s
/usr/share/ansible/openshift-ansible/playbooks/openshift-node/private/registry_auth.yml:7
Run variable sanity checks -- 162.26s
/usr/share/ansible/openshift-ansible/playbooks/init/sanity_checks.yml:14
openshift_node : Create credentials for registry auth -- 97.57s
/usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml:16
Upgrade all storage -- 77.76s
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:36
Migrate storage post policy reconciliation -- 77.57s
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/private/upgrade.yml:182
Ensure openshift-ansible installer package deps are installed -- 65.07s
/usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:33
Initialize openshift.node.sdn_mtu -- 43.47s
/usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:60
Gather Cluster facts -- 43.18s
/usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:27
openshift_excluder : Install docker excluder - yum -- 33.13s
/usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml:9
Gathering Facts -- 32.06s
/usr/share/ansible/openshift-ansible/playbooks/init/basic_facts.yml:7
openshift_excluder : Install openshift excluder - yum -- 31.29s
/usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/install.yml:34
openshift_excluder : Get available excluder version -- 25.20s
/usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:4
openshift_cli : Install clients -- 24.97s
/usr/share/ansible/openshift-ansible/roles/openshift_cli/tasks/main.yml:2
openshift_excluder : Get available excluder version -- 22.58s
/usr/share/ansible/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:4
Run health checks (upgrade) -- 22.21s
/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/config.yml:45
Install ntp package -- 13.65s
/usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml:16
openshift_control_plane : verify API server -- 13.52s
/usr/share/ansible/openshift-ansible/roles/openshift_control_plane/handlers/main.yml:13
openshift_cli : Install bash completion for oc tools -- 13.47s
/usr/share/ansible/openshift-ansible/roles/openshift_cli/tasks/main.yml:30
openshift_certificate_expiry : Ensure python dateutil library is present -- 13.31s
/usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/tasks/main.yml:2

Failure summary:

  1. Hosts:    sp-os-node04.os.ad.scanplus.de
     Play:     Update registry authentication credentials
     Task:     Create credentials for registry auth
     Message:

  2. Hosts:    sp-os-master01.os.ad.scanplus.de
     Play:     Update sync DS
     Task:     Wait for the sync daemonset to become ready and available
     Message:  Failed without returning a message.
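Both failures are retryable once their causes are addressed. One plausible way to verify and resume, assuming registry.redhat.io is the authenticated registry in use (the inventory's credential variables are not shown in this log, so treat the login check as an assumption):

    # On sp-os-node04: confirm the registry credentials that the failed
    # "Create credentials for registry auth" task depends on actually work
    docker login registry.redhat.io

    # On the first master: confirm the sync daemonset has settled (15/15 ready)
    oc --config=/etc/origin/master/admin.kubeconfig get ds sync -n openshift-node

    # Then re-run the control plane upgrade playbook from this log
    ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade_control_plane.yml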