From paul.lkw at gmail.com  Tue May  1 11:43:11 2018
From: paul.lkw at gmail.com (Paul.LKW)
Date: Tue, 1 May 2018 19:43:11 +0800
Subject: [ovirt-users] hosted-engine --deploy Failed
Message-ID:

Dear All:

I recently tried to set up a Self-Hosted Engine oVirt deployment, but it failed on both of my boxes, and the deployment experience was poor. First of all, the online documentation is wrong: the "oVirt Self-Hosted Engine Guide" section says the deployment script "hosted-engine --deploy" will ask for the storage configuration immediately, but that is no longer true. Both boxes failed, one configured with a bonded interface and one without, so to help narrow down the issue I am posting the log from the bonded-interface box first for your reference. The script sits at "TASK [Wait for the host to be up]" for a very long time and then gives me this error:

[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 120, "changed": false}
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using
[ ERROR ] `result|succeeded` instead use `result is succeeded`. This feature will be
[ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by setting
[ ERROR ] deprecation_warnings=False in ansible.cfg.
[ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using
[ ERROR ] `result|succeeded` instead use `result is succeeded`. This feature will be
[ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by setting
[ ERROR ] deprecation_warnings=False in ansible.cfg.
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180501190540.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log

Attached is the full (very long) log.
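For context, "Wait for the host to be up" is essentially a polling loop: the deployment playbook repeatedly asks the bootstrap engine for the newly added host and gives up once its retry budget is exhausted, which is why the error above shows an empty ovirt_hosts list after 120 attempts. A minimal sketch of that retry pattern follows; it is illustrative only (the variable names and the exact condition are assumptions, not a copy of the role shipped with ovirt-hosted-engine-setup):

    - name: Wait for the host to be up
      ovirt_hosts_facts:
        # Only look for the host being added, and only count it once it is 'up'.
        # 'he_host_name' is an assumed variable name for this sketch.
        pattern: "name={{ he_host_name }} status=up"
        auth: "{{ ovirt_auth }}"
      register: host_up_check
      # Keep polling until the facts query succeeds and returns at least one host.
      until: >-
        host_up_check is succeeded and
        host_up_check.ansible_facts.ovirt_hosts | length >= 1
      retries: 120
      delay: 5

Read that way, the failure means the host never reached "up" status in the engine within the retry window, so the cause is in the host-deploy/VDSM side rather than in the wait task itself. The DEPRECATION WARNING lines are only about the older `result|succeeded` filter syntax versus `result is succeeded` and do not by themselves cause the failure.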
-------------- next part -------------- 2018-05-01 18:44:59,161+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,161+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/log=bool:'True' 2018-05-01 18:44:59,161+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileHandle=file:'' 2018-05-01 18:44:59,161+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log' 2018-05-01 18:44:59,162+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilter=_MyLoggerFilter:'filter' 2018-05-01 18:44:59,162+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilterRe=list:'[<_sre.SRE_Pattern object at 0x22e9880>]' 2018-05-01 18:44:59,162+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logRemoveAtExit=bool:'False' 2018-05-01 18:44:59,162+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,162+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.core.misc.Plugin._init 2018-05-01 18:44:59,162+0800 DEBUG otopi.context context.dumpSequence:793 SEQUENCE DUMP - BEGIN 2018-05-01 18:44:59,162+0800 DEBUG otopi.context context.dumpSequence:795 STAGE boot 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.misc.Plugin._preinit (None) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._boot (None) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._boot (None) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.log.Plugin._init (otopi.core.log.init) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.misc.Plugin._init (None) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.misc.Plugin._init (otopi.dialog.misc.boot) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.info.Plugin._init (None) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.human.Plugin._init (None) 2018-05-01 18:44:59,163+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.machine.Plugin._init (None) 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.misc.Plugin._boot_misc_done (otopi.dialog.boot.done) 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._boot (None) 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._boot (otopi.packagers.yum.boot) 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:795 STAGE init 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.config.Plugin._init (otopi.core.config.init) 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._init (None) 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:800 METHOD 
otopi.plugins.otopi.packagers.yumpackager.Plugin._init (None) 2018-05-01 18:44:59,164+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.command.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.core.Plugin._init (otopi.packagers.detection) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_ansiblesetup.core.storage_domain.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.answerfile.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.ha_notifications.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.misc.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.remote_answerfile.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.vdsmconf.Plugin._init (None) 2018-05-01 18:44:59,165+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.fqdn.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.health.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.gateway.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.sanlock.lockspace.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._init (None) 2018-05-01 18:44:59,166+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cpu.Plugin._init (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.image.Plugin._init (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.mac.Plugin._init (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.machine.Plugin._init (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.memory.Plugin._init (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.transaction.Plugin._init (otopi.core.transactions.init) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.answer_file.Plugin._init (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context 
context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.cli.Plugin._init (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.machine.Plugin._init_machine_events_stuff (None) 2018-05-01 18:44:59,167+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.firewalld.Plugin._init (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.iptables.Plugin._init (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.ssh.Plugin._init (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.clock.Plugin._init (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.reboot.Plugin._init (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:795 STAGE setup 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._setup (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup_existence (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup_existence (None) 2018-05-01 18:44:59,168+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.shell.Plugin._setup (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.config.Plugin._post_init (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.log.Plugin._setup (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.misc.Plugin._setup (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.fqdn.Plugin._setup (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._setup (None) 2018-05-01 18:44:59,169+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.gateway.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.firewalld.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.hostname.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.services.openrc.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.services.rhel.Plugin._setup (None) 
2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.services.systemd.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.clock.Plugin._setup (None) 2018-05-01 18:44:59,170+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.reboot.Plugin._setup (None) 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:795 STAGE internal_packages 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.transaction.Plugin._pre_prepare (None) 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._internal_packages (None) 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._internal_packages_end (None) 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._internal_packages_end (None) 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.transaction.Plugin._pre_end (None) 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:795 STAGE programs 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._check_NM (None) 2018-05-01 18:44:59,171+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.command.Plugin._programs (otopi.system.command.detection) 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.services.openrc.Plugin._programs (None) 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.services.rhel.Plugin._programs (None) 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.services.systemd.Plugin._programs (None) 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:795 STAGE late_setup 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.vdsmconf.Plugin._late_setup (ohosted.vdsm.conf.loaded) 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._late_setup (None) 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.misc.Plugin._late_setup (ohosted.core.check.maintenance.mode) 2018-05-01 18:44:59,172+0800 DEBUG otopi.context context.dumpSequence:795 STAGE customization 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.firewalld.Plugin._customization (None) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.config.Plugin._customize1 (None) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._storage_start (ohosted.dialog.titles.storage.start) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.cli.Plugin._customize (otopi.dialog.cli.customization) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.remote_answerfile.Plugin._customization 
(ohosted.core.require.answerfile) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._storage_end (ohosted.dialog.titles.storage.end) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._validation (ohosted.engine.ca.acquired.customization) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._network_start (ohosted.dialog.titles.network.start) 2018-05-01 18:44:59,173+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._detect_bridges (ohosted.network.bridge.detected) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.gateway.Plugin._customization (ohosted.networking.gateway.configuration.available) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._customization (None) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._get_existing_bridge_interface (None) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._network_end (ohosted.dialog.titles.network.end) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._vm_start (ohosted.dialog.titles.vm.start) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._customization_ansible (ohosted.configuration.ovf.ansible) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._customization (ohosted.configuration.ovf) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._customization (ohosted.boot.configuration.cloud_init_options) 2018-05-01 18:44:59,174+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cpu.Plugin._customization (ohosted.vm.cpu.model.number) 2018-05-01 18:44:59,175+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.image.Plugin._disk_customization (None) 2018-05-01 18:44:59,175+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.memory.Plugin._customization (None) 2018-05-01 18:44:59,175+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.mac.Plugin._customization (ohosted.vm.mac.customization) 2018-05-01 18:44:59,175+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._customize_vm_networking (ohosted.boot.configuration.cloud_init_vm_networking) 2018-05-01 18:44:59,175+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._vm_end (ohosted.dialog.titles.vm.end) 2018-05-01 18:44:59,175+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._engine_start (ohosted.dialog.titles.engine.start) 2018-05-01 18:44:59,175+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.ha_notifications.Plugin._customization (None) 2018-05-01 18:44:59,175+0800 DEBUG 
otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._customization (None) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.fqdn.Plugin._customization (None) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.titles.Plugin._engine_end (ohosted.dialog.titles.engine.end) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.config.Plugin._customize2 (None) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:795 STAGE validation 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.misc.Plugin._validation (None) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._get_hostname_from_bridge_if (ohosted.network.hostname.got) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.log.Plugin._validation (None) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.firewalld.Plugin._validation (otopi.network.firewalld.validation) 2018-05-01 18:44:59,176+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.hostname.Plugin._validation (None) 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.iptables.Plugin._validate (otopi.network.iptables.validation) 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.ssh.Plugin._validation (None) 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._validate_hostname_first_host (None) 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:795 STAGE transaction-prepare 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.transaction.Plugin._main_prepare (None) 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:795 STAGE early_misc 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.firewalld.Plugin._early_misc (None) 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:795 STAGE packages 2018-05-01 18:44:59,177+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.iptables.Plugin._packages (None) 2018-05-01 18:44:59,178+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._packages (None) 2018-05-01 18:44:59,178+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._packages (None) 2018-05-01 18:44:59,178+0800 DEBUG otopi.context context.dumpSequence:795 STAGE misc 2018-05-01 18:44:59,178+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.misc.Plugin._misc_reached (None) 2018-05-01 18:44:59,178+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.command.Plugin._misc (otopi.system.command.redetection) 2018-05-01 18:44:59,178+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.ha_notifications.Plugin._misc (ohosted.notifications.broker.conf.available) 2018-05-01 18:44:59,178+0800 DEBUG 
otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._misc (ohosted.network.bridge.available) 2018-05-01 18:44:59,178+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.sanlock.lockspace.Plugin._misc (ohosted.sanlock.initialized) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._misc_backup_disk (ohosted.upgrade.disk.backup.saved) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._misc (None) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.firewalld.Plugin._misc (None) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.iptables.Plugin._store_iptables (None) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.ssh.Plugin._append_key (None) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.clock.Plugin._set_clock (None) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.image.Plugin._misc (ohosted.vm.image.available) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._misc (ohosted.vm.ovf.imported) 2018-05-01 18:44:59,179+0800 DEBUG otopi.context context.dumpSequence:795 STAGE cleanup 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.transaction.Plugin._main_end (None) 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:795 STAGE closeup 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._closeup (ohosted.ansible.bootstrap.local.vm) 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.answerfile.Plugin._closeup (ohosted.notifications.answerfile.available) 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.misc.Plugin._persist_files_start (ohosted.node.files.persist.start) 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.misc.Plugin.engine_vm_up_check (ohosted.engine.vm.up.check) 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.health.Plugin._closeup (ohosted.engine.alive) 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._closeup (None) 2018-05-01 18:44:59,180+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.misc.Plugin._closeup (ohosted.vm.state.shutdown) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.firewalld.Plugin._closeup (None) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.network.iptables.Plugin._closeup (None) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_ansiblesetup.core.storage_domain.Plugin._closeup (ohosted.ansible.create.storage.domain) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context 
context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.misc.Plugin._persist_files_end (ohosted.node.files.persist.end) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._closeup (ohosted.engine.ca.acquired.closeup) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.image.Plugin._closeup_ansible (ohosted.ansible.disk.customized) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_ansiblesetup.core.target_vm.Plugin._closeup (ohosted.ansible.create.target.vm) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.reboot.Plugin._closeup (None) 2018-05-01 18:44:59,181+0800 DEBUG otopi.context context.dumpSequence:795 STAGE cleanup 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._cleanup (None) 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._cleanup (None) 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._cleanup (None) 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._cleanup (None) 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.answer_file.Plugin._generate_answer_file (otopi.core.answer.file.generated) 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.answerfile.Plugin._save_answers_at_cleanup (None) 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:795 STAGE pre-terminate 2018-05-01 18:44:59,182+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.misc.Plugin._preTerminate (None) 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate (otopi.dialog.cli.termination) 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:795 STAGE terminate 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate (None) 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate (None) 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate (None) 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.core.log.Plugin._terminate (None) 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:795 STAGE reboot 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:800 METHOD otopi.plugins.otopi.system.reboot.Plugin._reboot (None) 2018-05-01 18:44:59,183+0800 DEBUG otopi.context context.dumpSequence:802 SEQUENCE DUMP - END 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/aborted=bool:'False' 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/debug=int:'0' 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/error=bool:'False' 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[]' 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/executionDirectory=str:'/root' 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exitCode=list:'[{'priority': 90001, 'code': 0}]' 2018-05-01 18:44:59,184+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/log=bool:'True' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/pluginGroups=str:'otopi:gr-he-common:gr-he-ansiblesetup' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/pluginPath=str:'/usr/share/otopi/plugins:/usr/share/ovirt-hosted-engine-setup/scripts/../plugins' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/suppressEnvironmentKeys=list:'[]' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/configFileName=str:'/etc/ovirt-hosted-engine-setup.conf' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logDir=str:'/var/log/ovirt-hosted-engine-setup' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileHandle=file:'' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log' 2018-05-01 18:44:59,185+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileNamePrefix=str:'ovirt-hosted-engine-setup' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilter=_MyLoggerFilter:'filter' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilterKeys=list:'['OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd']' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilterRe=list:'[<_sre.SRE_Pattern object at 0x22e9880>]' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logRemoveAtExit=bool:'False' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/randomizeEvents=bool:'False' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV INFO/PACKAGE_NAME=str:'otopi' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV INFO/PACKAGE_VERSION=str:'1.7.7' 2018-05-01 18:44:59,186+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,187+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,187+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV INFO/PACKAGE_NAME=str:'otopi' 2018-05-01 18:44:59,187+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV INFO/PACKAGE_VERSION=str:'1.7.7' 2018-05-01 18:44:59,187+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,187+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.dialog.misc.Plugin._init 2018-05-01 18:44:59,187+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,187+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/dialect=str:'human' 2018-05-01 
18:44:59,188+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,188+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.system.info.Plugin._init 2018-05-01 18:44:59,188+0800 DEBUG otopi.plugins.otopi.system.info info._init:39 SYSTEM INFORMATION - BEGIN 2018-05-01 18:44:59,188+0800 DEBUG otopi.plugins.otopi.system.info info._init:40 executable /bin/python 2018-05-01 18:44:59,188+0800 DEBUG otopi.plugins.otopi.system.info info._init:41 python /bin/python 2018-05-01 18:44:59,188+0800 DEBUG otopi.plugins.otopi.system.info info._init:42 platform linux2 2018-05-01 18:44:59,189+0800 DEBUG otopi.plugins.otopi.system.info info._init:43 distribution ('CentOS Linux', '7.4.1708', 'Core') 2018-05-01 18:44:59,189+0800 DEBUG otopi.plugins.otopi.system.info info._init:44 host 'STORAGE' 2018-05-01 18:44:59,189+0800 DEBUG otopi.plugins.otopi.system.info info._init:50 uid 0 euid 0 gid 0 egid 0 2018-05-01 18:44:59,189+0800 DEBUG otopi.plugins.otopi.system.info info._init:52 SYSTEM INFORMATION - END 2018-05-01 18:44:59,189+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.dialog.human.Plugin._init 2018-05-01 18:44:59,190+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,190+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/autoAcceptDefault=bool:'False' 2018-05-01 18:44:59,190+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' 2018-05-01 18:44:59,190+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,190+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.dialog.machine.Plugin._init 2018-05-01 18:44:59,190+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:44:59,191+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.dialog.misc.Plugin._boot_misc_done 2018-05-01 18:44:59,191+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._boot 2018-05-01 18:44:59,192+0800 DEBUG otopi.plugins.otopi.packagers.dnfpackager dnfpackager._boot:173 Cannot initialize minidnf 2018-05-01 18:44:59,192+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,192+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfDisabledPlugins=list:'[]' 2018-05-01 18:44:59,192+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfExpireCache=bool:'True' 2018-05-01 18:44:59,192+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfRollback=bool:'True' 2018-05-01 18:44:59,192+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfpackagerEnabled=bool:'True' 2018-05-01 18:44:59,192+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/keepAliveInterval=int:'30' 2018-05-01 18:44:59,193+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,193+0800 DEBUG otopi.context context._executeMethod:128 Stage boot METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._boot Loaded plugins: fastestmirror, product-id, subscription-manager This system is not registered with an entitlement server. You can use subscription-manager to register. 
2018-05-01 18:44:59,340+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,340+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumDisabledPlugins=list:'[]' 2018-05-01 18:44:59,340+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumEnabledPlugins=list:'[]' 2018-05-01 18:44:59,340+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumExpireCache=bool:'True' 2018-05-01 18:44:59,340+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumRollback=bool:'True' 2018-05-01 18:44:59,340+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumpackagerEnabled=bool:'True' 2018-05-01 18:44:59,340+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,341+0800 INFO otopi.context context.runSequence:741 Stage: Initializing 2018-05-01 18:44:59,341+0800 DEBUG otopi.context context.runSequence:745 STAGE init 2018-05-01 18:44:59,341+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.core.config.Plugin._init 2018-05-01 18:44:59,341+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,342+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/configFileAppend=NoneType:'None' 2018-05-01 18:44:59,342+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,342+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._init 2018-05-01 18:44:59,342+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:44:59,343+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._init 2018-05-01 18:44:59,343+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager._init:199 Registering yum packager 2018-05-01 18:44:59,343+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.system.command.Plugin._init 2018-05-01 18:44:59,344+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,344+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/Arcconf:/root/bin:/usr/Arcconf' 2018-05-01 18:44:59,344+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,344+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.packagers.core.Plugin._init 2018-05-01 18:44:59,345+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._init 2018-05-01 18:44:59,345+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,345+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/localVMDir=NoneType:'None' 2018-05-01 18:44:59,345+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/enableHcGlusterService=NoneType:'None' 2018-05-01 18:44:59,345+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/localVmUUID=str:'72921200-b111-4515-96a0-19524dd65141' 2018-05-01 18:44:59,345+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,346+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_ansiblesetup.core.storage_domain.Plugin._init 2018-05-01 
18:44:59,346+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,346+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/LunID=NoneType:'None' 2018-05-01 18:44:59,346+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/domainType=NoneType:'None' 2018-05-01 18:44:59,346+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIDiscoverPassword=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIDiscoverUser=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortal=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalIPAddress=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalPassword=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalPort=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalUser=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSITargetName=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/mntOptions=NoneType:'None' 2018-05-01 18:44:59,347+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/nfsVersion=NoneType:'None' 2018-05-01 18:44:59,348+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/storageDomainConnection=NoneType:'None' 2018-05-01 18:44:59,348+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/storageDomainName=str:'hosted_storage' 2018-05-01 18:44:59,348+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,348+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.core.answerfile.Plugin._init 2018-05-01 18:44:59,348+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,349+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/etcAnswerFile=str:'/etc/ovirt-hosted-engine/answers.conf' 2018-05-01 18:44:59,349+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/userAnswerFile=NoneType:'None' 2018-05-01 18:44:59,349+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,349+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.core.ha_notifications.Plugin._init 2018-05-01 18:44:59,350+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,350+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/destEmail=NoneType:'None' 2018-05-01 18:44:59,350+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpPort=NoneType:'None' 2018-05-01 18:44:59,350+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpServer=NoneType:'None' 2018-05-01 18:44:59,350+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/sourceEmail=NoneType:'None' 2018-05-01 18:44:59,350+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,351+0800 DEBUG 
otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.core.misc.Plugin._init 2018-05-01 18:44:59,351+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,351+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/ansibleDeployment=bool:'False' 2018-05-01 18:44:59,351+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/deployProceed=NoneType:'None' 2018-05-01 18:44:59,351+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/miscReached=bool:'False' 2018-05-01 18:44:59,351+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/nodeSetup=bool:'False' 2018-05-01 18:44:59,352+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/rollbackProceed=NoneType:'None' 2018-05-01 18:44:59,352+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/rollbackUpgrade=bool:'False' 2018-05-01 18:44:59,352+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/upgradeProceed=NoneType:'None' 2018-05-01 18:44:59,352+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/upgradingAppliance=bool:'False' 2018-05-01 18:44:59,352+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/clusterName=NoneType:'None' 2018-05-01 18:44:59,352+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,353+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.core.remote_answerfile.Plugin._init 2018-05-01 18:44:59,353+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,353+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_FIRST_HOST/deployWithHE35Hosts=NoneType:'None' 2018-05-01 18:44:59,353+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_FIRST_HOST/skipSharedStorageAF=bool:'False' 2018-05-01 18:44:59,353+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,354+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.core.vdsmconf.Plugin._init 2018-05-01 18:44:59,354+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,354+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/kvmGid=int:'36' 2018-05-01 18:44:59,355+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/serviceName=str:'vdsmd' 2018-05-01 18:44:59,355+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/useSSL=bool:'True' 2018-05-01 18:44:59,355+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/vdscli=NoneType:'None' 2018-05-01 18:44:59,355+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/vdsmUid=int:'36' 2018-05-01 18:44:59,355+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,356+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._init 2018-05-01 18:44:59,356+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,356+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/adminPassword=NoneType:'None' 2018-05-01 18:44:59,356+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/adminUsername=str:'admin at internal' 2018-05-01 18:44:59,356+0800 DEBUG 
otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/insecureSSL=NoneType:'None' 2018-05-01 18:44:59,356+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/interactiveAdminPassword=bool:'True' 2018-05-01 18:44:59,356+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/temporaryCertificate=NoneType:'None' 2018-05-01 18:44:59,357+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,357+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.engine.fqdn.Plugin._init 2018-05-01 18:44:59,358+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,358+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/fqdn=NoneType:'None' 2018-05-01 18:44:59,358+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,358+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.engine.health.Plugin._init 2018-05-01 18:44:59,359+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,359+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/engineSetupTimeout=int:'1800' 2018-05-01 18:44:59,359+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,360+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._init 2018-05-01 18:44:59,360+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,360+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/appHostName=NoneType:'None' 2018-05-01 18:44:59,360+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/allowInvalidBondModes=bool:'False' 2018-05-01 18:44:59,360+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/bridgeIf=NoneType:'None' 2018-05-01 18:44:59,360+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/bridgeName=str:'ovirtmgmt' 2018-05-01 18:44:59,361+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/fqdnReverseValidation=bool:'False' 2018-05-01 18:44:59,361+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/host_name=NoneType:'None' 2018-05-01 18:44:59,361+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/refuseDeployingWithNM=bool:'False' 2018-05-01 18:44:59,361+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,362+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.network.gateway.Plugin._init 2018-05-01 18:44:59,362+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,362+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/gateway=NoneType:'None' 2018-05-01 18:44:59,362+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,363+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.sanlock.lockspace.Plugin._init 2018-05-01 18:44:59,363+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,363+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_SANLOCK/lockspaceName=str:'hosted-engine' 2018-05-01 18:44:59,363+0800 
DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_SANLOCK/serviceName=str:'sanlock' 2018-05-01 18:44:59,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/lockspaceImageUUID=NoneType:'None' 2018-05-01 18:44:59,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/lockspaceVolumeUUID=NoneType:'None' 2018-05-01 18:44:59,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/metadataImageUUID=NoneType:'None' 2018-05-01 18:44:59,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/metadataVolumeUUID=NoneType:'None' 2018-05-01 18:44:59,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/createLMVolumes=bool:'False' 2018-05-01 18:44:59,364+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,365+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._init 2018-05-01 18:44:59,365+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,365+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/tempDir=str:'/var/tmp' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/ovfSizeGB=NoneType:'None' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupFileName=NoneType:'None' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupImgUUID=NoneType:'None' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupVolUUID=NoneType:'None' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/dstBackupFileName=NoneType:'None' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/acceptDownloadEApplianceRPM=NoneType:'None' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceVersion=NoneType:'None' 2018-05-01 18:44:59,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/ovfArchive=NoneType:'None' 2018-05-01 18:44:59,367+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,367+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._init 2018-05-01 18:44:59,367+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/enableLibgfapi=NoneType:'None' 2018-05-01 18:44:59,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/automateVMShutdown=NoneType:'None' 2018-05-01 18:44:59,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudInitISO=NoneType:'None' 2018-05-01 18:44:59,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitExecuteEngineSetup=NoneType:'None' 2018-05-01 18:44:59,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitHostIP=NoneType:'None' 2018-05-01 18:44:59,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceDomainName=NoneType:'None' 2018-05-01 18:44:59,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceHostName=NoneType:'None' 2018-05-01 
18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitRootPwd=NoneType:'None' 2018-05-01 18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMDNS=NoneType:'None' 2018-05-01 18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMETCHOSTS=NoneType:'None' 2018-05-01 18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMStaticCIDR=NoneType:'None' 2018-05-01 18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMTZ=NoneType:'None' 2018-05-01 18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshAccess=NoneType:'None' 2018-05-01 18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshPubkey=NoneType:'None' 2018-05-01 18:44:59,369+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,370+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.vm.cpu.Plugin._init 2018-05-01 18:44:59,371+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,371+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceVCpus=NoneType:'None' 2018-05-01 18:44:59,371+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/maxVCpus=NoneType:'None' 2018-05-01 18:44:59,371+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmVCpus=NoneType:'None' 2018-05-01 18:44:59,371+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,372+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.vm.image.Plugin._init 2018-05-01 18:44:59,372+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,372+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/blockDeviceSizeGB=NoneType:'None' 2018-05-01 18:44:59,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgDesc=str:'Hosted Engine Image' 2018-05-01 18:44:59,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgSizeGB=NoneType:'None' 2018-05-01 18:44:59,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgUUID=str:'a8d46c38-af05-4a19-afd9-11e45cb26942' 2018-05-01 18:44:59,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/volUUID=str:'4de5c53d-9cd6-4897-b6a4-6ec2c8bbffb3' 2018-05-01 18:44:59,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupImgSizeGB=NoneType:'None' 2018-05-01 18:44:59,373+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,374+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.vm.mac.Plugin._init 2018-05-01 18:44:59,375+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,375+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmMACAddr=NoneType:'None' 2018-05-01 18:44:59,375+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,376+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.vm.machine.Plugin._init 2018-05-01 18:44:59,376+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - 
BEGIN 2018-05-01 18:44:59,376+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cdromUUID=str:'f90cef5e-1d7d-47ae-b3f0-34cabe5a9ff3' 2018-05-01 18:44:59,377+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/consoleUUID=str:'61d2a2c6-bf6c-441b-b1a9-b77c8e4fc814' 2018-05-01 18:44:59,377+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/emulatedMachine=str:'pc' 2018-05-01 18:44:59,377+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/nicUUID=str:'1b2fd7c7-0200-4bbf-afa8-2337aa0dfac8' 2018-05-01 18:44:59,377+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,378+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.gr_he_common.vm.memory.Plugin._init 2018-05-01 18:44:59,378+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,378+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceMem=NoneType:'None' 2018-05-01 18:44:59,378+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmMemSizeMB=NoneType:'None' 2018-05-01 18:44:59,379+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,379+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.core.transaction.Plugin._init 2018-05-01 18:44:59,380+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,380+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/internalPackageTransaction=Transaction:'transaction' 2018-05-01 18:44:59,380+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/mainTransaction=Transaction:'transaction' 2018-05-01 18:44:59,380+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/modifiedFiles=list:'[]' 2018-05-01 18:44:59,381+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,381+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.dialog.answer_file.Plugin._init 2018-05-01 18:44:59,381+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,382+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFile=NoneType:'None' 2018-05-01 18:44:59,382+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,383+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.dialog.cli.Plugin._init 2018-05-01 18:44:59,383+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,383+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/cliVersion=int:'1' 2018-05-01 18:44:59,383+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/customization=bool:'False' 2018-05-01 18:44:59,384+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,385+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.dialog.machine.Plugin._init_machine_events_stuff 2018-05-01 18:44:59,385+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:44:59,386+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.network.firewalld.Plugin._init 2018-05-01 18:44:59,386+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,386+0800 DEBUG 
otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldAvailable=bool:'False' 2018-05-01 18:44:59,386+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldDisableServices=list:'[]' 2018-05-01 18:44:59,387+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldEnable=bool:'False' 2018-05-01 18:44:59,387+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,388+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.network.iptables.Plugin._init 2018-05-01 18:44:59,388+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,388+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/iptablesEnable=bool:'False' 2018-05-01 18:44:59,388+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/iptablesRules=NoneType:'None' 2018-05-01 18:44:59,389+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,390+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.network.ssh.Plugin._init 2018-05-01 18:44:59,390+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,390+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshEnable=bool:'False' 2018-05-01 18:44:59,390+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshKey=NoneType:'None' 2018-05-01 18:44:59,390+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshUser=str:'' 2018-05-01 18:44:59,391+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,391+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.system.clock.Plugin._init 2018-05-01 18:44:59,392+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,392+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/clockMaxGap=int:'5' 2018-05-01 18:44:59,393+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/clockSet=bool:'False' 2018-05-01 18:44:59,393+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,393+0800 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.otopi.system.reboot.Plugin._init 2018-05-01 18:44:59,394+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:44:59,394+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/reboot=bool:'False' 2018-05-01 18:44:59,394+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/rebootAllow=bool:'True' 2018-05-01 18:44:59,394+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/rebootDeferTime=int:'10' 2018-05-01 18:44:59,395+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:44:59,395+0800 INFO otopi.context context.runSequence:741 Stage: Environment setup 2018-05-01 18:44:59,395+0800 DEBUG otopi.context context.runSequence:745 STAGE setup 2018-05-01 18:44:59,396+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._setup 2018-05-01 18:44:59,396+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND During customization use CTRL-D to abort. 
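(A quick aside for anyone reading these ENVIRONMENT DUMP blocks: every entry has the form ENV <namespace>/<key>=<type>:'<value>'. A small, purely illustrative Python helper, not part of otopi, to pull those entries out of the setup log for easier inspection could look like this, using the log path the setup prints below:)

  import re

  # Path as printed by the setup ("Log file:" further down); adjust to your own run.
  LOG = "/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log"

  # Matches lines such as: ENV OVEHOSTED_VM/vmMACAddr=NoneType:'None'
  ENV_RE = re.compile(r"ENV\s+(?P<key>[\w/.-]+)=(?P<type>\w+):'(?P<value>[^']*)'")

  with open(LOG) as f:
      for line in f:
          m = ENV_RE.search(line)
          if m:
              print("{key} = {value} ({type})".format(**m.groupdict()))
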
2018-05-01 18:44:59,396+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query DEPLOY_PROCEED 2018-05-01 18:44:59,397+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Continuing will configure this host for serving as hypervisor and create a local VM with a running engine. 2018-05-01 18:44:59,397+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND The locally running engine will be used to configure a storage domain and create a VM there. 2018-05-01 18:44:59,397+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND At the end the disk of the local VM will be moved to the shared storage. 2018-05-01 18:44:59,397+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Are you sure you want to continue? (Yes, No)[Yes]: 2018-05-01 18:45:07,189+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,189+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/ansibleDeployment=bool:'True' 2018-05-01 18:45:07,189+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/checkRequirements=bool:'True' 2018-05-01 18:45:07,190+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/deployProceed=bool:'True' 2018-05-01 18:45:07,191+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmCDRom=NoneType:'None' 2018-05-01 18:45:07,192+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DEPLOY_PROCEED=str:'yes' 2018-05-01 18:45:07,192+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,195+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup_existence 2018-05-01 18:45:07,195+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,199+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup_existence 2018-05-01 18:45:07,203+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.gr_he_common.core.shell.Plugin._setup 2018-05-01 18:45:07,205+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,205+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/screenProceed=NoneType:'None' 2018-05-01 18:45:07,205+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/skipTTYCheck=bool:'False' 2018-05-01 18:45:07,207+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,209+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.core.config.Plugin._post_init 2018-05-01 18:45:07,210+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Configuration files: [] 2018-05-01 18:45:07,214+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.core.log.Plugin._setup 2018-05-01 18:45:07,215+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log 2018-05-01 18:45:07,219+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.core.misc.Plugin._setup 2018-05-01 18:45:07,220+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Version: otopi-1.7.7 
(otopi-1.7.7-1.el7.centos) 2018-05-01 18:45:07,225+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup 2018-05-01 18:45:07,225+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,229+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup 2018-05-01 18:45:07,233+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Cleaning caches: ['expire-cache']. 2018-05-01 18:45:07,244+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.gr_he_common.engine.fqdn.Plugin._setup 2018-05-01 18:45:07,245+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,246+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/dig=NoneType:'None' 2018-05-01 18:45:07,246+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ip=NoneType:'None' 2018-05-01 18:45:07,248+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,250+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._setup 2018-05-01 18:45:07,255+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.gr_he_common.network.gateway.Plugin._setup 2018-05-01 18:45:07,256+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,256+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ping=NoneType:'None' 2018-05-01 18:45:07,258+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,260+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._setup 2018-05-01 18:45:07,261+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,265+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._setup 2018-05-01 18:45:07,266+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,266+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/genisoimage=NoneType:'None' 2018-05-01 18:45:07,267+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ssh-keygen=NoneType:'None' 2018-05-01 18:45:07,269+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,271+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.network.firewalld.Plugin._setup 2018-05-01 18:45:07,272+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,272+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/firewall-cmd=NoneType:'None' 2018-05-01 18:45:07,274+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,277+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.network.hostname.Plugin._setup 2018-05-01 18:45:07,281+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.services.openrc.Plugin._setup 2018-05-01 18:45:07,282+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,282+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/rc=NoneType:'None' 2018-05-01 
18:45:07,283+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/rc-update=NoneType:'None' 2018-05-01 18:45:07,285+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,287+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.services.rhel.Plugin._setup 2018-05-01 18:45:07,288+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,288+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chkconfig=NoneType:'None' 2018-05-01 18:45:07,289+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/initctl=NoneType:'None' 2018-05-01 18:45:07,289+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/service=NoneType:'None' 2018-05-01 18:45:07,289+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/systemctl=NoneType:'None' 2018-05-01 18:45:07,292+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,294+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.services.systemd.Plugin._setup 2018-05-01 18:45:07,298+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.system.clock.Plugin._setup 2018-05-01 18:45:07,299+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,299+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chronyc=NoneType:'None' 2018-05-01 18:45:07,300+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/date=NoneType:'None' 2018-05-01 18:45:07,300+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/hwclock=NoneType:'None' 2018-05-01 18:45:07,301+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ntpq=NoneType:'None' 2018-05-01 18:45:07,303+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,305+0800 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.system.reboot.Plugin._setup 2018-05-01 18:45:07,306+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,306+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/reboot=NoneType:'None' 2018-05-01 18:45:07,309+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,309+0800 INFO otopi.context context.runSequence:741 Stage: Environment packages setup 2018-05-01 18:45:07,310+0800 DEBUG otopi.context context.runSequence:745 STAGE internal_packages 2018-05-01 18:45:07,312+0800 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.otopi.core.transaction.Plugin._pre_prepare 2018-05-01 18:45:07,313+0800 DEBUG otopi.transaction transaction._prepare:61 preparing 'Yum Transaction' Loaded plugins: fastestmirror, product-id, subscription-manager 2018-05-01 18:45:07,345+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Connection built: host=subscription.rhsm.redhat.com port=443 handler=/subscription auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=False This system is not registered with an entitlement server. You can use subscription-manager to register. 
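(The COMMAND/...=NoneType:'None' entries above are just placeholders: the setup registers the external binaries it will need - chronyc, date, dig, firewall-cmd, genisoimage, ip, ping, ssh-keygen, systemctl and so on - and resolves them to full paths in the "Programs detection" stage further down. Conceptually that lookup is nothing more than a PATH search; a rough stand-alone sketch of the idea in Python 3, not otopi's actual code:)

  import shutil

  # Commands the log shows being registered during setup (subset).
  required = ["chronyc", "date", "dig", "firewall-cmd", "genisoimage",
              "ip", "ping", "reboot", "ssh-keygen", "systemctl"]

  for cmd in required:
      path = shutil.which(cmd)  # e.g. '/usr/sbin/ip', or None if not on PATH
      print("COMMAND/{0} = {1}".format(cmd, path or "not found"))
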
2018-05-01 18:45:07,469+0800 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._internal_packages 2018-05-01 18:45:07,470+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,471+0800 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._internal_packages_end 2018-05-01 18:45:07,471+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,473+0800 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._internal_packages_end 2018-05-01 18:45:07,473+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Building transaction 2018-05-01 18:45:07,503+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Empty transaction 2018-05-01 18:45:07,503+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Transaction Summary: 2018-05-01 18:45:07,505+0800 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.otopi.core.transaction.Plugin._pre_end 2018-05-01 18:45:07,505+0800 DEBUG otopi.transaction transaction.commit:147 committing 'Yum Transaction' Loaded plugins: fastestmirror, product-id, subscription-manager 2018-05-01 18:45:07,516+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Connection built: host=subscription.rhsm.redhat.com port=443 handler=/subscription auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=False This system is not registered with an entitlement server. You can use subscription-manager to register. 2018-05-01 18:45:07,566+0800 INFO otopi.context context.runSequence:741 Stage: Programs detection 2018-05-01 18:45:07,567+0800 DEBUG otopi.context context.runSequence:745 STAGE programs 2018-05-01 18:45:07,567+0800 DEBUG otopi.context context._executeMethod:128 Stage programs METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._check_NM 2018-05-01 18:45:07,568+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,569+0800 DEBUG otopi.context context._executeMethod:128 Stage programs METHOD otopi.plugins.otopi.system.command.Plugin._programs 2018-05-01 18:45:07,570+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,570+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chkconfig=str:'/usr/sbin/chkconfig' 2018-05-01 18:45:07,570+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chronyc=str:'/usr/bin/chronyc' 2018-05-01 18:45:07,570+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/date=str:'/usr/bin/date' 2018-05-01 18:45:07,571+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/dig=str:'/usr/bin/dig' 2018-05-01 18:45:07,571+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/firewall-cmd=str:'/usr/bin/firewall-cmd' 2018-05-01 18:45:07,571+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/genisoimage=str:'/usr/bin/genisoimage' 2018-05-01 18:45:07,571+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/hwclock=str:'/usr/sbin/hwclock' 2018-05-01 18:45:07,571+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ip=str:'/usr/sbin/ip' 2018-05-01 18:45:07,571+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ntpq=str:'/usr/sbin/ntpq' 2018-05-01 18:45:07,571+0800 
DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ping=str:'/usr/bin/ping' 2018-05-01 18:45:07,571+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/reboot=str:'/usr/sbin/reboot' 2018-05-01 18:45:07,572+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/service=str:'/usr/sbin/service' 2018-05-01 18:45:07,572+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ssh-keygen=str:'/usr/bin/ssh-keygen' 2018-05-01 18:45:07,572+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/systemctl=str:'/usr/bin/systemctl' 2018-05-01 18:45:07,572+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,573+0800 DEBUG otopi.context context._executeMethod:128 Stage programs METHOD otopi.plugins.otopi.services.openrc.Plugin._programs 2018-05-01 18:45:07,575+0800 DEBUG otopi.context context._executeMethod:128 Stage programs METHOD otopi.plugins.otopi.services.rhel.Plugin._programs 2018-05-01 18:45:07,575+0800 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:813 execute: ('/usr/bin/systemctl', 'show-environment'), executable='None', cwd='None', env=None 2018-05-01 18:45:07,638+0800 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:863 execute-result: ('/usr/bin/systemctl', 'show-environment'), rc=0 2018-05-01 18:45:07,638+0800 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'show-environment') stdout: LANG=en_US.UTF-8 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 2018-05-01 18:45:07,638+0800 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'show-environment') stderr: 2018-05-01 18:45:07,640+0800 DEBUG otopi.context context._executeMethod:128 Stage programs METHOD otopi.plugins.otopi.services.systemd.Plugin._programs 2018-05-01 18:45:07,640+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/usr/bin/systemctl', 'show-environment'), executable='None', cwd='None', env=None 2018-05-01 18:45:07,732+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/usr/bin/systemctl', 'show-environment'), rc=0 2018-05-01 18:45:07,732+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'show-environment') stdout: LANG=en_US.UTF-8 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 2018-05-01 18:45:07,732+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'show-environment') stderr: 2018-05-01 18:45:07,732+0800 DEBUG otopi.plugins.otopi.services.systemd systemd._programs:49 registering systemd provider 2018-05-01 18:45:07,734+0800 INFO otopi.context context.runSequence:741 Stage: Environment setup 2018-05-01 18:45:07,734+0800 DEBUG otopi.context context.runSequence:745 STAGE late_setup 2018-05-01 18:45:07,735+0800 DEBUG otopi.context context._executeMethod:128 Stage late_setup METHOD otopi.plugins.gr_he_common.core.vdsmconf.Plugin._late_setup 2018-05-01 18:45:07,735+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,736+0800 DEBUG otopi.context context._executeMethod:128 Stage late_setup METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._late_setup 2018-05-01 18:45:07,736+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,738+0800 DEBUG otopi.context context._executeMethod:128 Stage late_setup METHOD otopi.plugins.gr_he_common.vm.misc.Plugin._late_setup 
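(The "Programs detection" stage above also picks the service manager: the setup literally runs /usr/bin/systemctl show-environment and, since that exits with rc=0 here, registers the systemd provider. Reproducing that single probe by hand is occasionally handy when a host behaves oddly; a minimal sketch, not the plugin's own code:)

  import subprocess

  # Same probe the log shows: ('/usr/bin/systemctl', 'show-environment')
  proc = subprocess.Popen(["/usr/bin/systemctl", "show-environment"],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  out, err = proc.communicate()
  print("rc={0}".format(proc.returncode))  # rc=0 here is what lets systemd be registered
  print(out.decode())
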
2018-05-01 18:45:07,738+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,739+0800 INFO otopi.context context.runSequence:741 Stage: Environment customization 2018-05-01 18:45:07,739+0800 DEBUG otopi.context context.runSequence:745 STAGE customization 2018-05-01 18:45:07,740+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.otopi.network.firewalld.Plugin._customization 2018-05-01 18:45:07,740+0800 DEBUG otopi.plugins.otopi.services.systemd systemd.exists:73 check if service firewalld exists 2018-05-01 18:45:07,740+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/usr/bin/systemctl', 'show', '-p', 'LoadState', 'firewalld.service'), executable='None', cwd='None', env=None 2018-05-01 18:45:07,800+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/usr/bin/systemctl', 'show', '-p', 'LoadState', 'firewalld.service'), rc=0 2018-05-01 18:45:07,800+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'show', '-p', 'LoadState', 'firewalld.service') stdout: LoadState=loaded 2018-05-01 18:45:07,800+0800 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'show', '-p', 'LoadState', 'firewalld.service') stderr: 2018-05-01 18:45:07,803+0800 DEBUG otopi.plugins.otopi.network.firewalld firewalld._get_firewalld_cmd_version:105 firewalld version: 0.4.4.4 2018-05-01 18:45:07,804+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:07,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldAvailable=bool:'True' 2018-05-01 18:45:07,805+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:07,805+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.otopi.core.config.Plugin._customize1 2018-05-01 18:45:07,807+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._storage_start 2018-05-01 18:45:07,807+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:45:07,807+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND --== STORAGE CONFIGURATION ==-- 2018-05-01 18:45:07,807+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:45:07,809+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.otopi.dialog.cli.Plugin._customize 2018-05-01 18:45:07,809+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,811+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.remote_answerfile.Plugin._customization 2018-05-01 18:45:07,811+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,812+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._storage_end 2018-05-01 18:45:07,814+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._validation 2018-05-01 18:45:07,814+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:07,815+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD 
otopi.plugins.gr_he_common.core.titles.Plugin._network_start 2018-05-01 18:45:07,816+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:45:07,816+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND --== HOST NETWORK CONFIGURATION ==-- 2018-05-01 18:45:07,816+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:45:07,817+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._detect_bridges 2018-05-01 18:45:07,819+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.network.gateway.Plugin._customization 2018-05-01 18:45:07,819+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query OVEHOSTED_GATEWAY 2018-05-01 18:45:07,819+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please indicate a pingable gateway IP address [X.X.X.X]: 2018-05-01 18:45:12,300+0800 DEBUG otopi.plugins.gr_he_common.network.gateway plugin.executeRaw:813 execute: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), executable='None', cwd='None', env=None 2018-05-01 18:45:12,387+0800 DEBUG otopi.plugins.gr_he_common.network.gateway plugin.executeRaw:863 execute-result: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), rc=0 2018-05-01 18:45:12,387+0800 DEBUG otopi.plugins.gr_he_common.network.gateway plugin.execute:921 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stdout: PING X.X.X.X(X.X.X.X) 56(84) bytes of data. 64 bytes from X.X.X.X: icmp_seq=1 ttl=255 time=5.73 ms --- X.X.X.X ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 5.734/5.734/5.734/0.000 ms 2018-05-01 18:45:12,387+0800 DEBUG otopi.plugins.gr_he_common.network.gateway plugin.execute:926 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stderr: 2018-05-01 18:45:12,388+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:12,388+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/gateway=str:'X.X.X.X' 2018-05-01 18:45:12,389+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/OVEHOSTED_GATEWAY=str:'X.X.X.X' 2018-05-01 18:45:12,389+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:12,389+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._customization 2018-05-01 18:45:12,419+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 ansible-playbook: cmd: ['/bin/ansible-playbook', '--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', '--inventory=localhost,', '--extra-vars=@/tmp/tmpLG2QGU', '/usr/share/ovirt-hosted-engine-setup/ansible/get_network_interfaces.yml'] 2018-05-01 18:45:12,419+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 ansible-playbook: out_path: /tmp/tmpstK6Ep 2018-05-01 18:45:12,419+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155 ansible-playbook: vars_path: /tmp/tmpLG2QGU 2018-05-01 18:45:12,419+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:156 ansible-playbook: env: {'HE_ANSIBLE_LOG_PATH': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-get_network_interfaces-20180501184512-qcohcb.log', 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': 'X.X.X.X 1629 22', 'LOGNAME': 'root', 
'USER': 'root', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/Arcconf:/root/bin:/usr/Arcconf', 'HOME': '/root', 'GUESTFISH_RESTORE': '\\e[0m', 'GUESTFISH_INIT': '\\e[1;34m', 'LANG': 'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'SHLVL': '2', 'PWD': '/root', 'HISTSIZE': '1000', 'OTOPI_CALLBACK_OF': '/tmp/tmpstK6Ep', 'XDG_RUNTIME_DIR': '/run/user/0', 'GUESTFISH_PS1': '\\[\\e[1;32m\\]>\\[\\e[0;31m\\] ', 'ANSIBLE_STDOUT_CALLBACK': '1_otopi_json', 'PYTHONPATH': '/usr/share/ovirt-hosted-engine-setup/scripts/..:', 'MAIL': '/var/spool/mail/root', 'ANSIBLE_CALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger', 'XDG_SESSION_ID': '461', 'STY': '9512.pts-0.STORAGE', 'TERMCAP': 'SC|screen|VT 100/ANSI X3.64 virtual terminal:\\\n\t:DO=\\E[%dB:LE=\\E[%dD:RI=\\E[%dC:UP=\\E[%dA:bs:bt=\\E[Z:\\\n\t:cd=\\E[J:ce=\\E[K:cl=\\E[H\\E[J:cm=\\E[%i%d;%dH:ct=\\E[3g:\\\n\t:do=^J:nd=\\E[C:pt:rc=\\E8:rs=\\Ec:sc=\\E7:st=\\EH:up=\\EM:\\\n\t:le=^H:bl=^G:cr=^M:it#8:ho=\\E[H:nw=\\EE:ta=^I:is=\\E)0:\\\n\t:li#63:co#237:am:xn:xv:LP:sr=\\EM:al=\\E[L:AL=\\E[%dL:\\\n\t:cs=\\E[%i%d;%dr:dl=\\E[M:DL=\\E[%dM:dc=\\E[P:DC=\\E[%dP:\\\n\t:im=\\E[4h:ei=\\E[4l:mi:IC=\\E[%d@:ks=\\E[?1h\\E=:\\\n\t:ke=\\E[?1l\\E>:vi=\\E[?25l:ve=\\E[34h\\E[?25h:vs=\\E[34l:\\\n\t:ti=\\E[?1049h:te=\\E[?1049l:us=\\E[4m:ue=\\E[24m:so=\\E[3m:\\\n\t:se=\\E[23m:mb=\\E[5m:md=\\E[1m:mr=\\E[7m:me=\\E[m:ms:\\\n\t:Co#8:pa#64:AF=\\E[3%dm:AB=\\E[4%dm:op=\\E[39;49m:AX:\\\n\t:vb=\\Eg:G0:as=\\E(0:ae=\\E(B:\\\n\t:ac=\\140\\140aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++,,hhII00:\\\n\t:po=\\E[5i:pf=\\E[4i:Km=\\E[M:k0=\\E[10~:k1=\\EOP:k2=\\EOQ:\\\n\t:k3=\\EOR:k4=\\EOS:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:\\\n\t:k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:F1=\\E[23~:F2=\\E[24~:\\\n\t:F3=\\E[1;2P:F4=\\E[1;2Q:F5=\\E[1;2R:F6=\\E[1;2S:\\\n\t:F7=\\E[15;2~:F8=\\E[17;2~:F9=\\E[18;2~:FA=\\E[19;2~:kb=\x7f:\\\n\t:K2=\\EOE:kB=\\E[Z:kF=\\E[1;2B:kR=\\E[1;2A:*4=\\E[3;2~:\\\n\t:*7=\\E[1;2F:#2=\\E[1;2H:#3=\\E[2;2~:#4=\\E[1;2D:%c=\\E[6;2~:\\\n\t:%e=\\E[5;2~:%i=\\E[1;2C:kh=\\E[1~:@1=\\E[1~:kH=\\E[4~:\\\n\t:@7=\\E[4~:kN=\\E[6~:kP=\\E[5~:kI=\\E[2~:kD=\\E[3~:ku=\\EOA:\\\n\t:kd=\\EOB:kr=\\EOC:kl=\\EOD:km:', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:', 'GUESTFISH_OUTPUT': '\\e[0m', 'SSH_TTY': '/dev/pts/0', 'HOSTNAME': 'STORAGE', 'HISTCONTROL': 'ignoredups', 'WINDOW': '0', 'OTOPI_LOGFILE': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log', 'SSH_CONNECTION': 'X.X.X.X 1629 X.X.X.X 22', 'OTOPI_EXECDIR': '/root'} 2018-05-01 18:45:13,730+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Network interfaces] 2018-05-01 18:45:13,831+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts] 2018-05-01 18:45:15,335+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:45:15,837+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Detecting interface on existing management bridge] 2018-05-01 18:45:16,239+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:45:16,541+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:45:16,942+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 bridge_interface: VARIABLE IS NOT DEFINED! 
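(Note the HE_ANSIBLE_LOG_PATH entry in the environment dump above: every ansible-playbook step of the deployment writes its own log under /var/log/ovirt-hosted-engine-setup/, e.g. the ...-ansible-get_network_interfaces-...log file here, and the later bootstrap playbooks get their own files too, so those are usually the first place to look for the real error. A small illustrative helper, not part of the setup tooling, to scan them for failed tasks:)

  import glob

  # Per-playbook ansible logs written by the deployment (see HE_ANSIBLE_LOG_PATH above).
  pattern = "/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-*.log"

  for logfile in sorted(glob.glob(pattern)):
      with open(logfile) as f:
          # "fatal:" / "FAILED!" are the markers ansible prints for failed tasks.
          hits = [line.rstrip() for line in f
                  if "fatal:" in line or "FAILED!" in line]
      if hits:
          print("== {0}".format(logfile))
          for line in hits:
              print("   " + line)
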
2018-05-01 18:45:17,244+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Get all active network interfaces] 2018-05-01 18:45:19,349+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:45:19,651+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 valid_network_interfaces: {'msg': u'All items completed', 'changed': False, 'results': [{'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'vnet0', 'changed': False, '_ansible_ignore_errors': None}, {'changed': False, '_ansible_no_log': False, 'failed': False, '_ansible_item_result': True, 'item': u'bond0', 'ansible_facts': {u'type': u'bonding', u'otopi_net_host': u'bond0', u'bond_valid_name': u'bond0'}, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'lo', 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f0', 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f1', 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f2', 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f3', 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'virbr0_nic', 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'virbr0', 'changed': False, '_ansible_ignore_errors': None}]} 2018-05-01 18:45:20,052+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Filter bonds with bad naming] 2018-05-01 18:45:20,855+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:45:21,257+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 bb_filtered_list: {'msg': u'All items completed', 'changed': False, 'results': [{'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'vnet0', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}, {'changed': False, '_ansible_no_log': False, 'failed': False, '_ansible_item_result': True, 'item': {'changed': False, '_ansible_no_log': False, 'item': u'bond0', '_ansible_item_result': True, 'failed': False, 'ansible_facts': {u'otopi_net_host': u'bond0', u'type': u'bonding', u'bond_valid_name': u'bond0'}, '_ansible_ignore_errors': None}, 'ansible_facts': {u'net_iface': {'changed': False, '_ansible_no_log': False, 'failed': False, 
'_ansible_item_result': True, 'item': u'bond0', 'ansible_facts': {u'type': u'bonding', u'otopi_net_host': u'bond0', u'bond_valid_name': u'bond0'}, '_ansible_ignore_errors': None}}, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'lo', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f0', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f1', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f2', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'enp6s0f3', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'virbr0_nic', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}, {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'virbr0', 'changed': False, '_ansible_ignore_errors': None}, 'changed': False, '_ansible_ignore_errors': None}]} 2018-05-01 18:45:21,558+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Generate output list] 2018-05-01 18:45:21,960+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:45:22,262+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:45:22,563+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 otopi_host_net: [u'bond0'] 2018-05-01 18:45:22,764+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 0 2018-05-01 18:45:22,765+0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 8 changed: 0 unreachable: 0 skipped: 1 failed: 0 2018-05-01 18:45:22,765+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout: 2018-05-01 18:45:22,766+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr: 2018-05-01 18:45:22,766+0800 DEBUG otopi.plugins.gr_he_common.network.bridge bridge._customization:160 {u'otopi_host_net': {u'changed': False, u'ansible_facts': {u'otopi_host_net': [u'bond0']}, u'_ansible_no_log': False}} 2018-05-01 18:45:22,767+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query ovehosted_bridge_if 2018-05-01 18:45:22,768+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please indicate a nic to set ovirtmgmt bridge on: (bond0) [bond0]: 2018-05-01 18:45:26,028+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:26,028+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/bridgeIf=str:'bond0' 2018-05-01 18:45:26,030+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_bridge_if=str:'bond0' 2018-05-01 18:45:26,030+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:26,033+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._get_existing_bridge_interface 2018-05-01 18:45:26,033+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:26,038+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._network_end 2018-05-01 18:45:26,042+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._vm_start 2018-05-01 18:45:26,043+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:45:26,044+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND --== VM CONFIGURATION ==-- 2018-05-01 18:45:26,044+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:45:26,049+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._customization_ansible 2018-05-01 18:45:26,049+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query OVEHOSTED_VMENV_OVF_ANSIBLE 2018-05-01 18:45:26,050+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND If you want to deploy with a custom engine appliance image, 2018-05-01 18:45:26,050+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND please specify the path to the OVA archive you would like to use 2018-05-01 18:45:26,051+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND (leave it empty to skip, the setup will use ovirt-engine-appliance rpm installing it if missing): 2018-05-01 18:45:40,579+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:45:40,580+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceMem=int:'16384' 2018-05-01 18:45:40,580+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceVCpus=str:'4' 2018-05-01 18:45:40,581+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV 
OVEHOSTED_VM/ovfArchive=str:'' 2018-05-01 18:45:40,581+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str:'' 2018-05-01 18:45:40,582+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:45:40,584+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._customization 2018-05-01 18:45:40,585+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:45:40,589+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._customization 2018-05-01 18:45:40,590+0800 INFO otopi.plugins.gr_he_common.vm.cloud_init cloud_init._get_host_tz:363 Detecting host timezone. 2018-05-01 18:45:40,592+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init dialog.queryEnvKey:90 queryEnvKey called for key None 2018-05-01 18:45:40,592+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_INSTANCE_HOSTNAME 2018-05-01 18:45:40,593+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please provide the FQDN you would like to use for the engine appliance. 2018-05-01 18:45:40,593+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Note: This will be the FQDN of the engine VM you are now going to launch, 2018-05-01 18:45:40,594+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND it should not point to the base host or to any other existing machine. 2018-05-01 18:45:40,594+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Engine VM FQDN: []: 2018-05-01 18:46:24,367+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE ovirt-engine-2.xxxx.net 2018-05-01 18:46:24,368+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init hostname._validateFQDNresolvability:261 ovirt-engine-2.xxxx.net resolves to: set(['192.168.122.32']) 2018-05-01 18:46:24,369+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:813 execute: ('/usr/sbin/ip', 'addr'), executable='None', cwd='None', env=None 2018-05-01 18:46:24,382+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:863 execute-result: ('/usr/sbin/ip', 'addr'), rc=0 2018-05-01 18:46:24,383+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:921 execute-output: ('/usr/sbin/ip', 'addr') stdout: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp6s0f0: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 3: enp6s0f1: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 4: enp6s0f2: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:ea brd ff:ff:ff:ff:ff:ff 5: enp6s0f3: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:eb brd ff:ff:ff:ff:ff:ff 6: bond0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff inet X.X.X.X/27 brd X.X.X.X scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::225:b3ff:fe00:7ee8/64 scope link valid_lft forever preferred_lft forever 22: virbr0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 52:54:00:f2:38:2e brd 
ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft forever 23: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000 link/ether 52:54:00:f2:38:2e brd ff:ff:ff:ff:ff:ff 25: vnet0: mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 1000 link/ether fe:16:3e:56:8b:e1 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe56:8be1/64 scope link valid_lft forever preferred_lft forever 2018-05-01 18:46:24,383+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:926 execute-output: ('/usr/sbin/ip', 'addr') stderr: 2018-05-01 18:46:24,384+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init hostname.getLocalAddresses:223 addresses: [u'X.X.X.X', u'127.0.0.1', u'192.168.122.1'] 2018-05-01 18:46:24,385+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_INSTANCE_DOMAINNAME 2018-05-01 18:46:24,386+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please provide the domain name you would like to use for the engine appliance. 2018-05-01 18:46:24,386+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Engine VM domain: [xxxx.net] 2018-05-01 18:46:30,192+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_ROOT_PASSWORD 2018-05-01 18:46:30,193+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Enter root password that will be used for the engine appliance: 2018-05-01 18:46:34,911+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_ROOT_PASSWORD 2018-05-01 18:46:34,912+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Confirm appliance root password: 2018-05-01 18:46:38,599+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_ROOT_SSH_PUBKEY 2018-05-01 18:46:38,600+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip): 2018-05-01 18:46:40,175+0800 WARNING otopi.plugins.gr_he_common.vm.cloud_init cloud_init._customization:771 Skipping appliance root ssh public key 2018-05-01 18:46:40,176+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init dialog.queryEnvKey:90 queryEnvKey called for key OVEHOSTED_VM/rootSshAccess 2018-05-01 18:46:40,177+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_ROOT_SSH_ACCESS 2018-05-01 18:46:40,177+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Do you want to enable ssh access for the root user (yes, no, without-password) [yes]: 2018-05-01 18:46:44,680+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:46:44,681+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/clusterName=str:'Default' 2018-05-01 18:46:44,682+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/automateVMShutdown=bool:'True' 2018-05-01 18:46:44,682+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudInitISO=str:'generate' 2018-05-01 18:46:44,682+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitExecuteEngineSetup=bool:'True' 2018-05-01 18:46:44,683+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceDomainName=str:'xxxx.net' 2018-05-01 18:46:44,683+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceHostName=str:'ovirt-engine-2.xxxx.net' 
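(What the _validateFQDNresolvability / getLocalAddresses lines above boil down to: the engine VM FQDN you enter has to resolve, and the prompt warns it must not point at the base host or any existing machine; the setup gathers the host's own addresses from 'ip addr' and, as far as I can tell, complains if they overlap. A rough stand-alone approximation of the resolution part, not the actual ovirt-hosted-engine-setup code:)

  import socket

  fqdn = "ovirt-engine-2.xxxx.net"  # the value entered in this run

  # Resolve the FQDN the same way a client would.
  addrs = sorted({info[4][0] for info in socket.getaddrinfo(fqdn, None)})
  print("{0} resolves to: {1}".format(fqdn, addrs))

  # The setup then compares this list against the host's own addresses
  # (taken from the 'ip addr' output logged above).
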
2018-05-01 18:46:44,684+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitRootPwd=str:'**FILTERED**' 2018-05-01 18:46:44,684+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMTZ=str:'Asia/Hong_Kong' 2018-05-01 18:46:44,685+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshAccess=str:'yes' 2018-05-01 18:46:44,685+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshPubkey=str:'' 2018-05-01 18:46:44,685+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_INSTANCE_DOMAINNAME=str:'xxxx.net' 2018-05-01 18:46:44,686+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_INSTANCE_HOSTNAME=str:'ovirt-engine-2.xxxx.net' 2018-05-01 18:46:44,686+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_PASSWORD=str:'**FILTERED**' 2018-05-01 18:46:44,687+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_SSH_ACCESS=str:'yes' 2018-05-01 18:46:44,687+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_SSH_PUBKEY=str:'' 2018-05-01 18:46:44,687+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/2/CI_ROOT_PASSWORD=str:'**FILTERED**' 2018-05-01 18:46:44,688+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:46:44,690+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.cpu.Plugin._customization 2018-05-01 18:46:44,692+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query ovehosted_vmenv_cpu 2018-05-01 18:46:44,692+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]: 2018-05-01 18:46:49,072+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:46:49,072+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/maxVCpus=str:'8' 2018-05-01 18:46:49,073+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmVCpus=str:'4' 2018-05-01 18:46:49,073+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_cpu=str:'4' 2018-05-01 18:46:49,074+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:46:49,076+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.image.Plugin._disk_customization 2018-05-01 18:46:49,077+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:46:49,081+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.memory.Plugin._customization 2018-05-01 18:46:49,083+0800 DEBUG otopi.plugins.gr_he_common.vm.memory dialog.queryEnvKey:90 queryEnvKey called for key OVEHOSTED_VM/vmMemSizeMB 2018-05-01 18:46:49,083+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query ovehosted_vmenv_mem 2018-05-01 18:46:49,084+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please specify the memory size of the VM in MB (Defaults to appliance OVF value): [16384]: 2018-05-01 18:46:53,318+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE 4096 2018-05-01 18:46:53,320+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:46:53,321+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV 
OVEHOSTED_VM/vmMemSizeMB=int:'4096' 2018-05-01 18:46:53,321+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_mem=str:'4096' 2018-05-01 18:46:53,322+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:46:53,324+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.mac.Plugin._customization 2018-05-01 18:46:53,325+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query ovehosted_vmenv_mac 2018-05-01 18:46:53,325+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:22:82:01]: 2018-05-01 18:46:54,136+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:46:54,136+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:22:82:01' 2018-05-01 18:46:54,137+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_mac=str:'00:16:3e:22:82:01' 2018-05-01 18:46:54,137+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:46:54,140+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._customize_vm_networking 2018-05-01 18:46:54,140+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init cloud_init._getMyIPAddress:115 Acquiring 'bond0' address 2018-05-01 18:46:54,141+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:813 execute: ('/usr/sbin/ip', 'addr', 'show', 'bond0'), executable='None', cwd='None', env=None 2018-05-01 18:46:54,154+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:863 execute-result: ('/usr/sbin/ip', 'addr', 'show', 'bond0'), rc=0 2018-05-01 18:46:54,155+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:921 execute-output: ('/usr/sbin/ip', 'addr', 'show', 'bond0') stdout: 6: bond0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff inet X.X.X.X/27 brd X.X.X.X scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::225:b3ff:fe00:7ee8/64 scope link valid_lft forever preferred_lft forever 2018-05-01 18:46:54,155+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:926 execute-output: ('/usr/sbin/ip', 'addr', 'show', 'bond0') stderr: 2018-05-01 18:46:54,156+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init cloud_init._getMyIPAddress:132 address: X.X.X.X/27 2018-05-01 18:46:54,158+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_VM_STATIC_NETWORKING 2018-05-01 18:46:54,158+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND How should the engine VM network be configured (DHCP, Static)[DHCP]? 2018-05-01 18:47:00,830+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE Static 2018-05-01 18:47:00,831+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:813 execute: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), executable='None', cwd='None', env=None 2018-05-01 18:47:00,936+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:863 execute-result: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), rc=0 2018-05-01 18:47:00,937+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:921 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stdout: PING X.X.X.X (X.X.X.X) 56(84) bytes of data. 
64 bytes from X.X.X.X: icmp_seq=1 ttl=255 time=0.867 ms --- X.X.X.X ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.867/0.867/0.867/0.000 ms 2018-05-01 18:47:00,937+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:926 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stderr: 2018-05-01 18:47:00,938+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:813 execute: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), executable='None', cwd='None', env=None 2018-05-01 18:47:01,042+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:863 execute-result: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), rc=0 2018-05-01 18:47:01,043+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:921 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stdout: PING X.X.X.X (X.X.X.X) 56(84) bytes of data. 64 bytes from X.X.X.X: icmp_seq=1 ttl=64 time=0.640 ms --- X.X.X.X ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 2018-05-01 18:47:01,044+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:926 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stderr: 2018-05-01 18:47:01,044+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:813 execute: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), executable='None', cwd='None', env=None 2018-05-01 18:47:04,154+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:863 execute-result: ('/usr/bin/ping', '-c', '1', 'X.X.X.X'), rc=1 2018-05-01 18:47:04,155+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:921 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stdout: PING X.X.X.X (X.X.X.X) 56(84) bytes of data. From X.X.X.X icmp_seq=1 Destination Host Unreachable --- X.X.X.X ping statistics --- 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms 2018-05-01 18:47:04,156+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:926 execute-output: ('/usr/bin/ping', '-c', '1', 'X.X.X.X') stderr: 2018-05-01 18:47:04,157+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CLOUDINIT_VM_STATIC_IP_ADDRESS 2018-05-01 18:47:04,157+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please enter the IP address to be used for the engine VM [X.X.X.X]: 2018-05-01 18:47:19,829+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE X.X.X.X 2018-05-01 18:47:19,831+0800 INFO otopi.plugins.gr_he_common.vm.cloud_init cloud_init._customize_vm_addressing:292 The engine VM will be configured to use X.X.X.X/27 2018-05-01 18:47:19,833+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_DNS 2018-05-01 18:47:19,834+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM 2018-05-01 18:47:19,834+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Engine VM DNS (leave it empty to skip) [X.X.X.X8,X.X.X.X7]: 2018-05-01 18:47:24,110+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query CI_VM_ETC_HOST 2018-05-01 18:47:24,111+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Add lines for the appliance itself and for this host to /etc/hosts on the engine VM? 
2018-05-01 18:47:24,111+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Note: ensuring that this host could resolve the engine VM hostname is still up to you 2018-05-01 18:47:24,112+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND (Yes, No)[No] 2018-05-01 18:47:31,228+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE No 2018-05-01 18:47:31,229+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init cloud_init._getMyIPAddress:115 Acquiring 'bond0' address 2018-05-01 18:47:31,229+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:813 execute: ('/usr/sbin/ip', 'addr', 'show', 'bond0'), executable='None', cwd='None', env=None 2018-05-01 18:47:31,242+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:863 execute-result: ('/usr/sbin/ip', 'addr', 'show', 'bond0'), rc=0 2018-05-01 18:47:31,244+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:921 execute-output: ('/usr/sbin/ip', 'addr', 'show', 'bond0') stdout: 6: bond0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff inet X.X.X.X/27 brd X.X.X.X scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::225:b3ff:fe00:7ee8/64 scope link valid_lft forever preferred_lft forever 2018-05-01 18:47:31,244+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:926 execute-output: ('/usr/sbin/ip', 'addr', 'show', 'bond0') stderr: 2018-05-01 18:47:31,244+0800 DEBUG otopi.plugins.gr_he_common.vm.cloud_init cloud_init._getMyIPAddress:132 address: X.X.X.X/27 2018-05-01 18:47:31,247+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:47:31,248+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitHostIP=str:'X.X.X.X' 2018-05-01 18:47:31,248+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMDNS=str:'X.X.X.X8,X.X.X.X7' 2018-05-01 18:47:31,248+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMETCHOSTS=bool:'False' 2018-05-01 18:47:31,249+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMStaticCIDR=str:'X.X.X.X/27' 2018-05-01 18:47:31,249+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_DNS=str:'X.X.X.X8,X.X.X.X7' 2018-05-01 18:47:31,250+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_VM_ETC_HOST=str:'no' 2018-05-01 18:47:31,250+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_VM_STATIC_NETWORKING=str:'static' 2018-05-01 18:47:31,251+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CLOUDINIT_VM_STATIC_IP_ADDRESS=str:'X.X.X.X' 2018-05-01 18:47:31,251+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:47:31,253+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._vm_end 2018-05-01 18:47:31,258+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._engine_start 2018-05-01 18:47:31,259+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:47:31,259+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND --== HOSTED ENGINE CONFIGURATION ==-- 2018-05-01 18:47:31,260+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND 2018-05-01 18:47:31,265+0800 DEBUG 
otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.ha_notifications.Plugin._customization 2018-05-01 18:47:31,267+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query DIALOGOVEHOSTED_NOTIF/smtpServer 2018-05-01 18:47:31,267+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please provide the name of the SMTP server through which we will send notifications [localhost]: 2018-05-01 18:47:40,748+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query DIALOGOVEHOSTED_NOTIF/smtpPort 2018-05-01 18:47:40,748+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please provide the TCP port number of the SMTP server [25]: 2018-05-01 18:47:42,180+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query DIALOGOVEHOSTED_NOTIF/sourceEmail 2018-05-01 18:47:42,180+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please provide the email address from which notifications will be sent [root at localhost]: 2018-05-01 18:47:48,763+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE support at xxxx.net 2018-05-01 18:47:48,763+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query DIALOGOVEHOSTED_NOTIF/destEmail 2018-05-01 18:47:48,764+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please provide a comma-separated list of email addresses which will get notifications [root at localhost]: 2018-05-01 18:47:52,476+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE support at xxxx.net 2018-05-01 18:47:52,478+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:47:52,478+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/destEmail=str:'support at xxxx.net' 2018-05-01 18:47:52,479+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpPort=str:'25' 2018-05-01 18:47:52,479+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpServer=str:'localhost' 2018-05-01 18:47:52,479+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/sourceEmail=str:'support at xxxx.net' 2018-05-01 18:47:52,481+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/destEmail=str:'support at xxxx.net' 2018-05-01 18:47:52,481+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpPort=str:'25' 2018-05-01 18:47:52,481+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:'localhost' 2018-05-01 18:47:52,482+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:'support at xxxx.net' 2018-05-01 18:47:52,482+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:47:52,485+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._customization 2018-05-01 18:47:52,485+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query ENGINE_ADMIN_PASSWORD 2018-05-01 18:47:52,486+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Enter engine admin password: 2018-05-01 18:47:57,163+0800 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query ENGINE_ADMIN_PASSWORD 2018-05-01 18:47:57,164+0800 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND Confirm engine admin password: 2018-05-01 18:48:00,316+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:48:00,317+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/adminPassword=str:'**FILTERED**' 2018-05-01 18:48:00,318+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**' 2018-05-01 18:48:00,319+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/2/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**' 2018-05-01 18:48:00,319+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:48:00,322+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.engine.fqdn.Plugin._customization 2018-05-01 18:48:00,323+0800 DEBUG otopi.plugins.gr_he_common.engine.fqdn dialog.queryEnvKey:90 queryEnvKey called for key OVEHOSTED_NETWORK/fqdn 2018-05-01 18:48:00,324+0800 DEBUG otopi.plugins.gr_he_common.engine.fqdn hostname._validateFQDNresolvability:261 ovirt-engine-2.xxxx.net resolves to: set(['192.168.122.32']) 2018-05-01 18:48:00,324+0800 DEBUG otopi.plugins.gr_he_common.engine.fqdn plugin.executeRaw:813 execute: ('/usr/sbin/ip', 'addr'), executable='None', cwd='None', env=None 2018-05-01 18:48:00,337+0800 DEBUG otopi.plugins.gr_he_common.engine.fqdn plugin.executeRaw:863 execute-result: ('/usr/sbin/ip', 'addr'), rc=0 2018-05-01 18:48:00,338+0800 DEBUG otopi.plugins.gr_he_common.engine.fqdn plugin.execute:921 execute-output: ('/usr/sbin/ip', 'addr') stdout: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp6s0f0: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 3: enp6s0f1: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 4: enp6s0f2: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:ea brd ff:ff:ff:ff:ff:ff 5: enp6s0f3: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:eb brd ff:ff:ff:ff:ff:ff 6: bond0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff inet X.X.X.X/27 brd X.X.X.X scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::225:b3ff:fe00:7ee8/64 scope link valid_lft forever preferred_lft forever 22: virbr0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 52:54:00:f2:38:2e brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft forever 23: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000 link/ether 52:54:00:f2:38:2e brd ff:ff:ff:ff:ff:ff 25: vnet0: mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 1000 link/ether fe:16:3e:56:8b:e1 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe56:8be1/64 scope link valid_lft forever preferred_lft forever 2018-05-01 18:48:00,339+0800 DEBUG otopi.plugins.gr_he_common.engine.fqdn plugin.execute:926 execute-output: ('/usr/sbin/ip', 'addr') stderr: 2018-05-01 18:48:00,339+0800 DEBUG otopi.plugins.gr_he_common.engine.fqdn hostname.getLocalAddresses:223 addresses: [u'X.X.X.X', u'127.0.0.1', u'192.168.122.1'] 2018-05-01 18:48:00,341+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 
18:48:00,342+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/fqdn=str:'ovirt-engine-2.xxxx.net' 2018-05-01 18:48:00,343+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:48:00,346+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._engine_end 2018-05-01 18:48:00,351+0800 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.otopi.core.config.Plugin._customize2 2018-05-01 18:48:00,354+0800 INFO otopi.context context.runSequence:741 Stage: Setup validation 2018-05-01 18:48:00,355+0800 DEBUG otopi.context context.runSequence:745 STAGE validation 2018-05-01 18:48:00,357+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.otopi.core.misc.Plugin._validation 2018-05-01 18:48:00,357+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:48:00,358+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/aborted=bool:'False' 2018-05-01 18:48:00,358+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/debug=int:'0' 2018-05-01 18:48:00,359+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False' 2018-05-01 18:48:00,359+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[]' 2018-05-01 18:48:00,359+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/executionDirectory=str:'/root' 2018-05-01 18:48:00,360+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exitCode=list:'[{'priority': 90001, 'code': 0}]' 2018-05-01 18:48:00,360+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/log=bool:'True' 2018-05-01 18:48:00,360+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/pluginGroups=str:'otopi:gr-he-common:gr-he-ansiblesetup' 2018-05-01 18:48:00,361+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/pluginPath=str:'/usr/share/otopi/plugins:/usr/share/ovirt-hosted-engine-setup/scripts/../plugins' 2018-05-01 18:48:00,361+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/suppressEnvironmentKeys=list:'[]' 2018-05-01 18:48:00,362+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chkconfig=str:'/usr/sbin/chkconfig' 2018-05-01 18:48:00,362+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chronyc=str:'/usr/bin/chronyc' 2018-05-01 18:48:00,362+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/date=str:'/usr/bin/date' 2018-05-01 18:48:00,363+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/dig=str:'/usr/bin/dig' 2018-05-01 18:48:00,363+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/firewall-cmd=str:'/usr/bin/firewall-cmd' 2018-05-01 18:48:00,363+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/genisoimage=str:'/usr/bin/genisoimage' 2018-05-01 18:48:00,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/hwclock=str:'/usr/sbin/hwclock' 2018-05-01 18:48:00,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/initctl=NoneType:'None' 2018-05-01 18:48:00,364+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ip=str:'/usr/sbin/ip' 2018-05-01 18:48:00,365+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ntpq=str:'/usr/sbin/ntpq' 2018-05-01 18:48:00,365+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ping=str:'/usr/bin/ping' 2018-05-01 18:48:00,365+0800 DEBUG otopi.context 
context.dumpEnvironment:869 ENV COMMAND/rc=NoneType:'None' 2018-05-01 18:48:00,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/rc-update=NoneType:'None' 2018-05-01 18:48:00,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/reboot=str:'/usr/sbin/reboot' 2018-05-01 18:48:00,366+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/service=str:'/usr/sbin/service' 2018-05-01 18:48:00,367+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ssh-keygen=str:'/usr/bin/ssh-keygen' 2018-05-01 18:48:00,367+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/systemctl=str:'/usr/bin/systemctl' 2018-05-01 18:48:00,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/configFileAppend=NoneType:'None' 2018-05-01 18:48:00,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/configFileName=str:'/etc/ovirt-hosted-engine-setup.conf' 2018-05-01 18:48:00,368+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True' 2018-05-01 18:48:00,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/internalPackageTransaction=Transaction:'transaction' 2018-05-01 18:48:00,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logDir=str:'/var/log/ovirt-hosted-engine-setup' 2018-05-01 18:48:00,369+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileHandle=file:'' 2018-05-01 18:48:00,370+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log' 2018-05-01 18:48:00,370+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileNamePrefix=str:'ovirt-hosted-engine-setup' 2018-05-01 18:48:00,370+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilter=_MyLoggerFilter:'filter' 2018-05-01 18:48:00,371+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilterKeys=list:'['OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd']' 2018-05-01 18:48:00,371+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilterRe=list:'[<_sre.SRE_Pattern object at 0x22e9880>]' 2018-05-01 18:48:00,372+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logRemoveAtExit=bool:'False' 2018-05-01 18:48:00,372+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/mainTransaction=Transaction:'transaction' 2018-05-01 18:48:00,372+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/modifiedFiles=list:'[]' 2018-05-01 18:48:00,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/randomizeEvents=bool:'False' 2018-05-01 18:48:00,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFile=NoneType:'None' 2018-05-01 18:48:00,373+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/autoAcceptDefault=bool:'False' 2018-05-01 18:48:00,374+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' 2018-05-01 18:48:00,374+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/cliVersion=int:'1' 2018-05-01 18:48:00,374+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/customization=bool:'False' 2018-05-01 18:48:00,375+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/dialect=str:'human' 2018-05-01 18:48:00,375+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV INFO/PACKAGE_NAME=str:'otopi' 2018-05-01 18:48:00,375+0800 DEBUG otopi.context 
context.dumpEnvironment:869 ENV INFO/PACKAGE_VERSION=str:'1.7.7' 2018-05-01 18:48:00,376+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldAvailable=bool:'True' 2018-05-01 18:48:00,376+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldDisableServices=list:'[]' 2018-05-01 18:48:00,377+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldEnable=bool:'False' 2018-05-01 18:48:00,377+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/iptablesEnable=bool:'False' 2018-05-01 18:48:00,377+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/iptablesRules=NoneType:'None' 2018-05-01 18:48:00,378+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshEnable=bool:'False' 2018-05-01 18:48:00,378+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshKey=NoneType:'None' 2018-05-01 18:48:00,378+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshUser=str:'' 2018-05-01 18:48:00,379+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/ansibleDeployment=bool:'True' 2018-05-01 18:48:00,379+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/checkRequirements=bool:'True' 2018-05-01 18:48:00,379+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/deployProceed=bool:'True' 2018-05-01 18:48:00,380+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/etcAnswerFile=str:'/etc/ovirt-hosted-engine/answers.conf' 2018-05-01 18:48:00,380+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/localVMDir=NoneType:'None' 2018-05-01 18:48:00,380+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/miscReached=bool:'False' 2018-05-01 18:48:00,381+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/nodeSetup=bool:'False' 2018-05-01 18:48:00,381+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/rollbackProceed=NoneType:'None' 2018-05-01 18:48:00,381+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/rollbackUpgrade=bool:'False' 2018-05-01 18:48:00,382+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/screenProceed=NoneType:'None' 2018-05-01 18:48:00,382+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/skipTTYCheck=bool:'False' 2018-05-01 18:48:00,383+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/tempDir=str:'/var/tmp' 2018-05-01 18:48:00,383+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/upgradeProceed=NoneType:'None' 2018-05-01 18:48:00,383+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/upgradingAppliance=bool:'False' 2018-05-01 18:48:00,384+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/userAnswerFile=NoneType:'None' 2018-05-01 18:48:00,384+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/adminPassword=str:'**FILTERED**' 2018-05-01 18:48:00,384+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/adminUsername=str:'admin at internal' 2018-05-01 18:48:00,385+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/appHostName=NoneType:'None' 2018-05-01 18:48:00,385+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/clusterName=str:'Default' 2018-05-01 18:48:00,385+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/enableHcGlusterService=NoneType:'None' 2018-05-01 
18:48:00,386+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/enableLibgfapi=NoneType:'None' 2018-05-01 18:48:00,386+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/engineSetupTimeout=int:'1800' 2018-05-01 18:48:00,387+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/insecureSSL=NoneType:'None' 2018-05-01 18:48:00,387+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/interactiveAdminPassword=bool:'True' 2018-05-01 18:48:00,387+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/temporaryCertificate=NoneType:'None' 2018-05-01 18:48:00,388+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_FIRST_HOST/deployWithHE35Hosts=NoneType:'None' 2018-05-01 18:48:00,388+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_FIRST_HOST/skipSharedStorageAF=bool:'False' 2018-05-01 18:48:00,388+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/allowInvalidBondModes=bool:'False' 2018-05-01 18:48:00,389+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/bridgeIf=str:'bond0' 2018-05-01 18:48:00,389+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/bridgeName=str:'ovirtmgmt' 2018-05-01 18:48:00,389+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/fqdn=str:'ovirt-engine-2.xxxx.net' 2018-05-01 18:48:00,390+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/fqdnReverseValidation=bool:'False' 2018-05-01 18:48:00,390+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/gateway=str:'X.X.X.X' 2018-05-01 18:48:00,390+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/host_name=NoneType:'None' 2018-05-01 18:48:00,391+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/refuseDeployingWithNM=bool:'False' 2018-05-01 18:48:00,391+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/destEmail=str:'support at xxxx.net' 2018-05-01 18:48:00,392+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpPort=str:'25' 2018-05-01 18:48:00,392+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpServer=str:'localhost' 2018-05-01 18:48:00,392+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/sourceEmail=str:'support at xxxx.net' 2018-05-01 18:48:00,393+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_SANLOCK/lockspaceName=str:'hosted-engine' 2018-05-01 18:48:00,393+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_SANLOCK/serviceName=str:'sanlock' 2018-05-01 18:48:00,393+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/LunID=NoneType:'None' 2018-05-01 18:48:00,394+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/blockDeviceSizeGB=NoneType:'None' 2018-05-01 18:48:00,394+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/domainType=NoneType:'None' 2018-05-01 18:48:00,394+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIDiscoverPassword=NoneType:'None' 2018-05-01 18:48:00,395+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIDiscoverUser=NoneType:'None' 2018-05-01 18:48:00,395+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortal=NoneType:'None' 2018-05-01 18:48:00,395+0800 DEBUG otopi.context 
context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalIPAddress=NoneType:'None' 2018-05-01 18:48:00,396+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalPassword=NoneType:'None' 2018-05-01 18:48:00,396+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalPort=NoneType:'None' 2018-05-01 18:48:00,397+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalUser=NoneType:'None' 2018-05-01 18:48:00,397+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSITargetName=NoneType:'None' 2018-05-01 18:48:00,397+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgDesc=str:'Hosted Engine Image' 2018-05-01 18:48:00,398+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgSizeGB=NoneType:'None' 2018-05-01 18:48:00,398+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgUUID=str:'a8d46c38-af05-4a19-afd9-11e45cb26942' 2018-05-01 18:48:00,398+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/lockspaceImageUUID=NoneType:'None' 2018-05-01 18:48:00,399+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/lockspaceVolumeUUID=NoneType:'None' 2018-05-01 18:48:00,399+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/metadataImageUUID=NoneType:'None' 2018-05-01 18:48:00,399+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/metadataVolumeUUID=NoneType:'None' 2018-05-01 18:48:00,400+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/mntOptions=NoneType:'None' 2018-05-01 18:48:00,400+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/nfsVersion=NoneType:'None' 2018-05-01 18:48:00,400+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/ovfSizeGB=NoneType:'None' 2018-05-01 18:48:00,401+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/storageDomainConnection=NoneType:'None' 2018-05-01 18:48:00,401+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/storageDomainName=str:'hosted_storage' 2018-05-01 18:48:00,402+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/volUUID=str:'4de5c53d-9cd6-4897-b6a4-6ec2c8bbffb3' 2018-05-01 18:48:00,402+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupFileName=NoneType:'None' 2018-05-01 18:48:00,402+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupImgSizeGB=NoneType:'None' 2018-05-01 18:48:00,403+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupImgUUID=NoneType:'None' 2018-05-01 18:48:00,403+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupVolUUID=NoneType:'None' 2018-05-01 18:48:00,403+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/createLMVolumes=bool:'False' 2018-05-01 18:48:00,404+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/dstBackupFileName=NoneType:'None' 2018-05-01 18:48:00,404+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/kvmGid=int:'36' 2018-05-01 18:48:00,404+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/serviceName=str:'vdsmd' 2018-05-01 18:48:00,405+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/useSSL=bool:'True' 2018-05-01 18:48:00,405+0800 DEBUG otopi.context 
context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/vdscli=NoneType:'None' 2018-05-01 18:48:00,405+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/vdsmUid=int:'36' 2018-05-01 18:48:00,406+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/acceptDownloadEApplianceRPM=NoneType:'None' 2018-05-01 18:48:00,406+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceMem=int:'16384' 2018-05-01 18:48:00,406+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceVCpus=str:'4' 2018-05-01 18:48:00,407+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceVersion=NoneType:'None' 2018-05-01 18:48:00,407+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/automateVMShutdown=bool:'True' 2018-05-01 18:48:00,408+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cdromUUID=str:'f90cef5e-1d7d-47ae-b3f0-34cabe5a9ff3' 2018-05-01 18:48:00,408+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudInitISO=str:'generate' 2018-05-01 18:48:00,408+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitExecuteEngineSetup=bool:'True' 2018-05-01 18:48:00,409+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitHostIP=str:'X.X.X.X' 2018-05-01 18:48:00,409+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceDomainName=str:'xxxx.net' 2018-05-01 18:48:00,409+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceHostName=str:'ovirt-engine-2.xxxx.net' 2018-05-01 18:48:00,410+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitRootPwd=str:'**FILTERED**' 2018-05-01 18:48:00,410+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMDNS=str:'X.X.X.X8,X.X.X.X7' 2018-05-01 18:48:00,410+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMETCHOSTS=bool:'False' 2018-05-01 18:48:00,411+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMStaticCIDR=str:'X.X.X.X/27' 2018-05-01 18:48:00,411+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMTZ=str:'Asia/Hong_Kong' 2018-05-01 18:48:00,411+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/consoleUUID=str:'61d2a2c6-bf6c-441b-b1a9-b77c8e4fc814' 2018-05-01 18:48:00,412+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/emulatedMachine=str:'pc' 2018-05-01 18:48:00,412+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/localVmUUID=str:'72921200-b111-4515-96a0-19524dd65141' 2018-05-01 18:48:00,413+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/maxVCpus=str:'8' 2018-05-01 18:48:00,413+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/nicUUID=str:'1b2fd7c7-0200-4bbf-afa8-2337aa0dfac8' 2018-05-01 18:48:00,413+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/ovfArchive=str:'' 2018-05-01 18:48:00,414+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshAccess=str:'yes' 2018-05-01 18:48:00,414+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshPubkey=str:'' 2018-05-01 18:48:00,414+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmCDRom=NoneType:'None' 2018-05-01 18:48:00,415+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:22:82:01' 2018-05-01 
18:48:00,415+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmMemSizeMB=int:'4096' 2018-05-01 18:48:00,415+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmVCpus=str:'4' 2018-05-01 18:48:00,416+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfDisabledPlugins=list:'[]' 2018-05-01 18:48:00,416+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfExpireCache=bool:'True' 2018-05-01 18:48:00,416+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfRollback=bool:'True' 2018-05-01 18:48:00,417+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfpackagerEnabled=bool:'True' 2018-05-01 18:48:00,417+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/keepAliveInterval=int:'30' 2018-05-01 18:48:00,418+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumDisabledPlugins=list:'[]' 2018-05-01 18:48:00,418+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumEnabledPlugins=list:'[]' 2018-05-01 18:48:00,418+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumExpireCache=bool:'True' 2018-05-01 18:48:00,419+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumRollback=bool:'True' 2018-05-01 18:48:00,419+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumpackagerEnabled=bool:'True' 2018-05-01 18:48:00,419+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_DNS=str:'X.X.X.X8,X.X.X.X7' 2018-05-01 18:48:00,420+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_INSTANCE_DOMAINNAME=str:'xxxx.net' 2018-05-01 18:48:00,420+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_INSTANCE_HOSTNAME=str:'ovirt-engine-2.xxxx.net' 2018-05-01 18:48:00,420+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_PASSWORD=str:'**FILTERED**' 2018-05-01 18:48:00,421+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_SSH_ACCESS=str:'yes' 2018-05-01 18:48:00,421+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_SSH_PUBKEY=str:'' 2018-05-01 18:48:00,421+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_VM_ETC_HOST=str:'no' 2018-05-01 18:48:00,422+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_VM_STATIC_NETWORKING=str:'static' 2018-05-01 18:48:00,422+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CLOUDINIT_VM_STATIC_IP_ADDRESS=str:'X.X.X.X' 2018-05-01 18:48:00,423+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DEPLOY_PROCEED=str:'yes' 2018-05-01 18:48:00,423+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/destEmail=str:'support at xxxx.net' 2018-05-01 18:48:00,423+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpPort=str:'25' 2018-05-01 18:48:00,424+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:'localhost' 2018-05-01 18:48:00,424+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:'support at xxxx.net' 2018-05-01 18:48:00,424+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**' 2018-05-01 18:48:00,425+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/OVEHOSTED_GATEWAY=str:'X.X.X.X' 2018-05-01 18:48:00,425+0800 DEBUG otopi.context 
context.dumpEnvironment:869 ENV QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str:'' 2018-05-01 18:48:00,425+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_bridge_if=str:'bond0' 2018-05-01 18:48:00,426+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_cpu=str:'4' 2018-05-01 18:48:00,426+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_mac=str:'00:16:3e:22:82:01' 2018-05-01 18:48:00,426+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_mem=str:'4096' 2018-05-01 18:48:00,427+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/2/CI_ROOT_PASSWORD=str:'**FILTERED**' 2018-05-01 18:48:00,427+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/2/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**' 2018-05-01 18:48:00,428+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/clockMaxGap=int:'5' 2018-05-01 18:48:00,428+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/clockSet=bool:'False' 2018-05-01 18:48:00,428+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/Arcconf:/root/bin:/usr/Arcconf' 2018-05-01 18:48:00,429+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/reboot=bool:'False' 2018-05-01 18:48:00,429+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/rebootAllow=bool:'True' 2018-05-01 18:48:00,429+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/rebootDeferTime=int:'10' 2018-05-01 18:48:00,430+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:48:00,434+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._get_hostname_from_bridge_if 2018-05-01 18:48:00,436+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:48:00,436+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/appHostName=str:'STORAGE' 2018-05-01 18:48:00,437+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/host_name=str:'STORAGE' 2018-05-01 18:48:00,438+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:48:00,441+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.otopi.core.log.Plugin._validation 2018-05-01 18:48:00,441+0800 DEBUG otopi.plugins.otopi.core.log log._validation:384 _filtered_keys_at_setup: ['OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd'] 2018-05-01 18:48:00,441+0800 DEBUG otopi.plugins.otopi.core.log log._validation:388 LOG_FILTER_KEYS: ['OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd'] 2018-05-01 18:48:00,446+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.otopi.network.firewalld.Plugin._validation 2018-05-01 18:48:00,451+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.otopi.network.hostname.Plugin._validation 2018-05-01 18:48:00,451+0800 DEBUG otopi.plugins.otopi.network.hostname hostname._validation:55 my name: STORAGE 2018-05-01 18:48:00,452+0800 DEBUG otopi.plugins.otopi.network.hostname plugin.executeRaw:813 execute: ('/usr/sbin/ip', 'addr', 'show'), executable='None', cwd='None', env=None 2018-05-01 18:48:00,465+0800 DEBUG otopi.plugins.otopi.network.hostname plugin.executeRaw:863 execute-result: ('/usr/sbin/ip', 'addr', 'show'), 
rc=0 2018-05-01 18:48:00,466+0800 DEBUG otopi.plugins.otopi.network.hostname plugin.execute:921 execute-output: ('/usr/sbin/ip', 'addr', 'show') stdout: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp6s0f0: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 3: enp6s0f1: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 4: enp6s0f2: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:ea brd ff:ff:ff:ff:ff:ff 5: enp6s0f3: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:eb brd ff:ff:ff:ff:ff:ff 6: bond0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff inet X.X.X.X/27 brd X.X.X.X scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::225:b3ff:fe00:7ee8/64 scope link valid_lft forever preferred_lft forever 22: virbr0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 52:54:00:f2:38:2e brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft forever 23: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000 link/ether 52:54:00:f2:38:2e brd ff:ff:ff:ff:ff:ff 25: vnet0: mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 1000 link/ether fe:16:3e:56:8b:e1 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe56:8be1/64 scope link valid_lft forever preferred_lft forever 2018-05-01 18:48:00,467+0800 DEBUG otopi.plugins.otopi.network.hostname plugin.execute:926 execute-output: ('/usr/sbin/ip', 'addr', 'show') stderr: 2018-05-01 18:48:00,467+0800 DEBUG otopi.plugins.otopi.network.hostname hostname._validation:100 my addresses: ['X.X.X.X', 'X.X.X.X', 'X.X.X.X'] 2018-05-01 18:48:00,468+0800 DEBUG otopi.plugins.otopi.network.hostname hostname._validation:101 local addresses: [u'X.X.X.X', u'fe80::225:b3ff:fe00:7ee8', u'192.168.122.1', u'fe80::fc16:3eff:fe56:8be1'] 2018-05-01 18:48:00,473+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.otopi.network.iptables.Plugin._validate 2018-05-01 18:48:00,473+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:00,478+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.otopi.network.ssh.Plugin._validation 2018-05-01 18:48:00,478+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:00,483+0800 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._validate_hostname_first_host 2018-05-01 18:48:00,484+0800 DEBUG otopi.plugins.gr_he_common.network.bridge dialog.queryEnvKey:90 queryEnvKey called for key OVEHOSTED_NETWORK/host_name 2018-05-01 18:48:00,485+0800 WARNING otopi.plugins.gr_he_common.network.bridge hostname._validateFQDN:360 Host name STORAGE has no domain suffix 2018-05-01 18:48:00,486+0800 DEBUG otopi.plugins.gr_he_common.network.bridge hostname._validateFQDNresolvability:261 STORAGE resolves to: set(['X.X.X.X']) 2018-05-01 18:48:00,486+0800 DEBUG otopi.plugins.gr_he_common.network.bridge plugin.executeRaw:813 execute: ['/usr/bin/dig', 'STORAGE'], executable='None', cwd='None', env=None 2018-05-01 18:48:01,105+0800 DEBUG 
otopi.plugins.gr_he_common.network.bridge plugin.executeRaw:863 execute-result: ['/usr/bin/dig', 'STORAGE'], rc=0 2018-05-01 18:48:01,106+0800 DEBUG otopi.plugins.gr_he_common.network.bridge plugin.execute:921 execute-output: ['/usr/bin/dig', 'STORAGE'] stdout: ; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7_4.2 <<>> STORAGE ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33191 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;STORAGE. IN A ;; AUTHORITY SECTION: STORAGE. 3600 IN SOA ns0.centralnic.net. hostmaster.centralnic.net. 1000050858 900 1800 6048000 3600 ;; Query time: 502 msec ;; SERVER: X.X.X.X8#53(X.X.X.X8) ;; WHEN: Tue May 01 18:48:01 HKT 2018 ;; MSG SIZE rcvd: 101 2018-05-01 18:48:01,107+0800 DEBUG otopi.plugins.gr_he_common.network.bridge plugin.execute:926 execute-output: ['/usr/bin/dig', 'STORAGE'] stderr: 2018-05-01 18:48:01,108+0800 WARNING otopi.plugins.gr_he_common.network.bridge hostname._validateFQDNresolvability:280 Failed to resolve STORAGE using DNS, it can be resolved only locally 2018-05-01 18:48:01,109+0800 DEBUG otopi.plugins.gr_he_common.network.bridge plugin.executeRaw:813 execute: ('/usr/sbin/ip', 'addr'), executable='None', cwd='None', env=None 2018-05-01 18:48:01,122+0800 DEBUG otopi.plugins.gr_he_common.network.bridge plugin.executeRaw:863 execute-result: ('/usr/sbin/ip', 'addr'), rc=0 2018-05-01 18:48:01,123+0800 DEBUG otopi.plugins.gr_he_common.network.bridge plugin.execute:921 execute-output: ('/usr/sbin/ip', 'addr') stdout: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp6s0f0: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 3: enp6s0f1: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff 4: enp6s0f2: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:ea brd ff:ff:ff:ff:ff:ff 5: enp6s0f3: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:25:b3:00:7e:eb brd ff:ff:ff:ff:ff:ff 6: bond0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:25:b3:00:7e:e8 brd ff:ff:ff:ff:ff:ff inet X.X.X.X/27 brd X.X.X.X scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::225:b3ff:fe00:7ee8/64 scope link valid_lft forever preferred_lft forever 22: virbr0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 52:54:00:f2:38:2e brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft forever 23: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000 link/ether 52:54:00:f2:38:2e brd ff:ff:ff:ff:ff:ff 25: vnet0: mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 1000 link/ether fe:16:3e:56:8b:e1 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe56:8be1/64 scope link valid_lft forever preferred_lft forever 2018-05-01 18:48:01,123+0800 DEBUG otopi.plugins.gr_he_common.network.bridge plugin.execute:926 execute-output: ('/usr/sbin/ip', 'addr') stderr: 2018-05-01 18:48:01,124+0800 DEBUG otopi.plugins.gr_he_common.network.bridge hostname.getLocalAddresses:223 addresses: [u'X.X.X.X', u'192.168.122.1'] 2018-05-01 18:48:01,128+0800 INFO otopi.context context.runSequence:741 Stage: Transaction setup 2018-05-01 
18:48:01,128+0800 DEBUG otopi.context context.runSequence:745 STAGE transaction-prepare 2018-05-01 18:48:01,131+0800 DEBUG otopi.context context._executeMethod:128 Stage transaction-prepare METHOD otopi.plugins.otopi.core.transaction.Plugin._main_prepare 2018-05-01 18:48:01,131+0800 DEBUG otopi.transaction transaction._prepare:61 preparing 'Yum Transaction' Loaded plugins: fastestmirror, product-id, subscription-manager 2018-05-01 18:48:01,166+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Connection built: host=subscription.rhsm.redhat.com port=443 handler=/subscription auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=False This system is not registered with an entitlement server. You can use subscription-manager to register. 2018-05-01 18:48:01,318+0800 INFO otopi.context context.runSequence:741 Stage: Misc configuration 2018-05-01 18:48:01,319+0800 DEBUG otopi.context context.runSequence:745 STAGE early_misc 2018-05-01 18:48:01,321+0800 DEBUG otopi.context context._executeMethod:128 Stage early_misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._early_misc 2018-05-01 18:48:01,322+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,325+0800 INFO otopi.context context.runSequence:741 Stage: Package installation 2018-05-01 18:48:01,325+0800 DEBUG otopi.context context.runSequence:745 STAGE packages 2018-05-01 18:48:01,328+0800 DEBUG otopi.context context._executeMethod:128 Stage packages METHOD otopi.plugins.otopi.network.iptables.Plugin._packages 2018-05-01 18:48:01,328+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,333+0800 DEBUG otopi.context context._executeMethod:128 Stage packages METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._packages 2018-05-01 18:48:01,333+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,338+0800 DEBUG otopi.context context._executeMethod:128 Stage packages METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._packages 2018-05-01 18:48:01,338+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Building transaction 2018-05-01 18:48:01,403+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Empty transaction 2018-05-01 18:48:01,403+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Transaction Summary: 2018-05-01 18:48:01,404+0800 INFO otopi.context context.runSequence:741 Stage: Misc configuration 2018-05-01 18:48:01,404+0800 DEBUG otopi.context context.runSequence:745 STAGE misc 2018-05-01 18:48:01,405+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.gr_he_common.core.misc.Plugin._misc_reached 2018-05-01 18:48:01,406+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 18:48:01,406+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/miscReached=bool:'True' 2018-05-01 18:48:01,406+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 18:48:01,407+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.otopi.system.command.Plugin._misc 2018-05-01 18:48:01,409+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.gr_he_common.core.ha_notifications.Plugin._misc 2018-05-01 18:48:01,409+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,411+0800 DEBUG otopi.context context._executeMethod:128 Stage misc 
METHOD otopi.plugins.gr_he_common.network.bridge.Plugin._misc 2018-05-01 18:48:01,411+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,412+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.gr_he_common.sanlock.lockspace.Plugin._misc 2018-05-01 18:48:01,413+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,414+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._misc_backup_disk 2018-05-01 18:48:01,414+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,416+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._misc 2018-05-01 18:48:01,416+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,417+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._misc 2018-05-01 18:48:01,418+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,419+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.otopi.network.iptables.Plugin._store_iptables 2018-05-01 18:48:01,419+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,421+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.otopi.network.ssh.Plugin._append_key 2018-05-01 18:48:01,421+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,423+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.otopi.system.clock.Plugin._set_clock 2018-05-01 18:48:01,423+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,424+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.gr_he_common.vm.image.Plugin._misc 2018-05-01 18:48:01,424+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,426+0800 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._misc 2018-05-01 18:48:01,426+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 18:48:01,427+0800 INFO otopi.context context.runSequence:741 Stage: Transaction commit 2018-05-01 18:48:01,427+0800 DEBUG otopi.context context.runSequence:745 STAGE cleanup 2018-05-01 18:48:01,428+0800 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.otopi.core.transaction.Plugin._main_end 2018-05-01 18:48:01,428+0800 DEBUG otopi.transaction transaction.commit:147 committing 'Yum Transaction' Loaded plugins: fastestmirror, product-id, subscription-manager 2018-05-01 18:48:01,439+0800 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Connection built: host=subscription.rhsm.redhat.com port=443 handler=/subscription auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=False This system is not registered with an entitlement server. You can use subscription-manager to register. 
2018-05-01 18:48:01,489+0800 INFO otopi.context context.runSequence:741 Stage: Closing up 2018-05-01 18:48:01,489+0800 DEBUG otopi.context context.runSequence:745 STAGE closeup 2018-05-01 18:48:01,490+0800 DEBUG otopi.context context._executeMethod:128 Stage closeup METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._closeup 2018-05-01 18:48:01,490+0800 INFO otopi.plugins.gr_he_ansiblesetup.core.misc misc.initial_clean_up:229 Cleaning previous attempts 2018-05-01 18:48:01,491+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 ansible-playbook: cmd: ['/bin/ansible-playbook', '--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', '--inventory=localhost, ovirt-engine-2.xxxx.net', '--extra-vars=@/tmp/tmppK0vRv', '/usr/share/ovirt-hosted-engine-setup/ansible/initial_clean.yml'] 2018-05-01 18:48:01,491+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 ansible-playbook: out_path: /tmp/tmpTtPqKJ 2018-05-01 18:48:01,491+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155 ansible-playbook: vars_path: /tmp/tmppK0vRv 2018-05-01 18:48:01,491+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:156 ansible-playbook: env: {'HE_ANSIBLE_LOG_PATH': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-initial_clean-20180501184801-gdmt30.log', 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': 'X.X.X.X 1629 22', 'LOGNAME': 'root', 'USER': 'root', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/Arcconf:/root/bin:/usr/Arcconf', 'HOME': '/root', 'GUESTFISH_RESTORE': '\\e[0m', 'GUESTFISH_INIT': '\\e[1;34m', 'LANG': 'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'SHLVL': '2', 'PWD': '/root', 'HISTSIZE': '1000', 'OTOPI_CALLBACK_OF': '/tmp/tmpTtPqKJ', 'XDG_RUNTIME_DIR': '/run/user/0', 'GUESTFISH_PS1': '\\[\\e[1;32m\\]>\\[\\e[0;31m\\] ', 'ANSIBLE_STDOUT_CALLBACK': '1_otopi_json', 'PYTHONPATH': '/usr/share/ovirt-hosted-engine-setup/scripts/..:', 'MAIL': '/var/spool/mail/root', 'ANSIBLE_CALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger', 'XDG_SESSION_ID': '461', 'STY': '9512.pts-0.STORAGE', 'TERMCAP': 'SC|screen|VT 100/ANSI X3.64 virtual 
terminal:\\\n\t:DO=\\E[%dB:LE=\\E[%dD:RI=\\E[%dC:UP=\\E[%dA:bs:bt=\\E[Z:\\\n\t:cd=\\E[J:ce=\\E[K:cl=\\E[H\\E[J:cm=\\E[%i%d;%dH:ct=\\E[3g:\\\n\t:do=^J:nd=\\E[C:pt:rc=\\E8:rs=\\Ec:sc=\\E7:st=\\EH:up=\\EM:\\\n\t:le=^H:bl=^G:cr=^M:it#8:ho=\\E[H:nw=\\EE:ta=^I:is=\\E)0:\\\n\t:li#63:co#237:am:xn:xv:LP:sr=\\EM:al=\\E[L:AL=\\E[%dL:\\\n\t:cs=\\E[%i%d;%dr:dl=\\E[M:DL=\\E[%dM:dc=\\E[P:DC=\\E[%dP:\\\n\t:im=\\E[4h:ei=\\E[4l:mi:IC=\\E[%d@:ks=\\E[?1h\\E=:\\\n\t:ke=\\E[?1l\\E>:vi=\\E[?25l:ve=\\E[34h\\E[?25h:vs=\\E[34l:\\\n\t:ti=\\E[?1049h:te=\\E[?1049l:us=\\E[4m:ue=\\E[24m:so=\\E[3m:\\\n\t:se=\\E[23m:mb=\\E[5m:md=\\E[1m:mr=\\E[7m:me=\\E[m:ms:\\\n\t:Co#8:pa#64:AF=\\E[3%dm:AB=\\E[4%dm:op=\\E[39;49m:AX:\\\n\t:vb=\\Eg:G0:as=\\E(0:ae=\\E(B:\\\n\t:ac=\\140\\140aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++,,hhII00:\\\n\t:po=\\E[5i:pf=\\E[4i:Km=\\E[M:k0=\\E[10~:k1=\\EOP:k2=\\EOQ:\\\n\t:k3=\\EOR:k4=\\EOS:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:\\\n\t:k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:F1=\\E[23~:F2=\\E[24~:\\\n\t:F3=\\E[1;2P:F4=\\E[1;2Q:F5=\\E[1;2R:F6=\\E[1;2S:\\\n\t:F7=\\E[15;2~:F8=\\E[17;2~:F9=\\E[18;2~:FA=\\E[19;2~:kb=\x7f:\\\n\t:K2=\\EOE:kB=\\E[Z:kF=\\E[1;2B:kR=\\E[1;2A:*4=\\E[3;2~:\\\n\t:*7=\\E[1;2F:#2=\\E[1;2H:#3=\\E[2;2~:#4=\\E[1;2D:%c=\\E[6;2~:\\\n\t:%e=\\E[5;2~:%i=\\E[1;2C:kh=\\E[1~:@1=\\E[1~:kH=\\E[4~:\\\n\t:@7=\\E[4~:kN=\\E[6~:kP=\\E[5~:kI=\\E[2~:kD=\\E[3~:ku=\\EOA:\\\n\t:kd=\\EOB:kr=\\EOC:kl=\\EOD:km:', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:', 'GUESTFISH_OUTPUT': '\\e[0m', 'SSH_TTY': '/dev/pts/0', 'HOSTNAME': 'STORAGE', 'HISTCONTROL': 'ignoredups', 'WINDOW': '0', 'OTOPI_LOGFILE': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log', 'SSH_CONNECTION': 'X.X.X.X 1629 X.X.X.X 22', 'OTOPI_EXECDIR': '/root'} 2018-05-01 18:48:02,803+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Clean previous deploy attempts] 2018-05-01 18:48:02,904+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts] 2018-05-01 18:48:04,508+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:04,910+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Stop libvirt service] 2018-05-01 18:48:06,314+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:06,817+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Drop vdsm config statements] 2018-05-01 18:48:09,423+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Restore initial abrt config files] 2018-05-01 18:48:14,534+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Restart abrtd service] 2018-05-01 18:48:15,437+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:15,738+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Drop libvirt sasl2 configuration by vdsm] 2018-05-01 18:48:16,341+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:16,642+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Stop and disable services] 2018-05-01 18:48:19,449+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Start libvirt] 2018-05-01 18:48:20,452+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:20,753+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Check for leftover local Hosted Engine VM] 2018-05-01 18:48:22,057+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:22,459+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Destroy leftover local Hosted Engine VM] 2018-05-01 18:48:22,761+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:48:23,062+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Check for leftover defined local Hosted Engine VM] 2018-05-01 18:48:23,965+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:24,467+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Undefine leftover local engine VM] 2018-05-01 18:48:25,470+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:25,772+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Remove eventually entries for the local VM from known_hosts file] 2018-05-01 18:48:26,875+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:27,277+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 12 changed: 9 unreachable: 0 skipped: 1 failed: 0 2018-05-01 18:48:27,378+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 0 2018-05-01 18:48:27,378+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils.run:187 ansible-playbook stdout: 2018-05-01 18:48:27,379+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr: 2018-05-01 18:48:27,379+0800 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc.initial_clean_up:231 {} 2018-05-01 18:48:27,380+0800 INFO otopi.plugins.gr_he_ansiblesetup.core.misc misc._closeup:195 Starting local VM 2018-05-01 18:48:27,381+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 ansible-playbook: cmd: ['/bin/ansible-playbook', '--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', '--inventory=localhost, ovirt-engine-2.xxxx.net', '--extra-vars=@/tmp/tmprrY4yl', '/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.yml'] 2018-05-01 18:48:27,382+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 ansible-playbook: out_path: /tmp/tmpbKI3Sk 2018-05-01 18:48:27,382+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155 ansible-playbook: vars_path: /tmp/tmprrY4yl 2018-05-01 18:48:27,383+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:156 ansible-playbook: env: {'HE_ANSIBLE_LOG_PATH': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20180501184827-9r2md7.log', 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': 'X.X.X.X 1629 22', 'LOGNAME': 'root', 'USER': 'root', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/Arcconf:/root/bin:/usr/Arcconf', 'HOME': '/root', 'GUESTFISH_RESTORE': '\\e[0m', 'GUESTFISH_INIT': '\\e[1;34m', 'LANG': 'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'SHLVL': '2', 'PWD': '/root', 'HISTSIZE': '1000', 'OTOPI_CALLBACK_OF': '/tmp/tmpbKI3Sk', 'XDG_RUNTIME_DIR': '/run/user/0', 'GUESTFISH_PS1': '\\[\\e[1;32m\\]>\\[\\e[0;31m\\] ', 'ANSIBLE_STDOUT_CALLBACK': '1_otopi_json', 'PYTHONPATH': '/usr/share/ovirt-hosted-engine-setup/scripts/..:', 'MAIL': '/var/spool/mail/root', 'ANSIBLE_CALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger', 'XDG_SESSION_ID': '461', 'STY': '9512.pts-0.STORAGE', 'TERMCAP': 'SC|screen|VT 100/ANSI X3.64 virtual 
terminal:\\\n\t:DO=\\E[%dB:LE=\\E[%dD:RI=\\E[%dC:UP=\\E[%dA:bs:bt=\\E[Z:\\\n\t:cd=\\E[J:ce=\\E[K:cl=\\E[H\\E[J:cm=\\E[%i%d;%dH:ct=\\E[3g:\\\n\t:do=^J:nd=\\E[C:pt:rc=\\E8:rs=\\Ec:sc=\\E7:st=\\EH:up=\\EM:\\\n\t:le=^H:bl=^G:cr=^M:it#8:ho=\\E[H:nw=\\EE:ta=^I:is=\\E)0:\\\n\t:li#63:co#237:am:xn:xv:LP:sr=\\EM:al=\\E[L:AL=\\E[%dL:\\\n\t:cs=\\E[%i%d;%dr:dl=\\E[M:DL=\\E[%dM:dc=\\E[P:DC=\\E[%dP:\\\n\t:im=\\E[4h:ei=\\E[4l:mi:IC=\\E[%d@:ks=\\E[?1h\\E=:\\\n\t:ke=\\E[?1l\\E>:vi=\\E[?25l:ve=\\E[34h\\E[?25h:vs=\\E[34l:\\\n\t:ti=\\E[?1049h:te=\\E[?1049l:us=\\E[4m:ue=\\E[24m:so=\\E[3m:\\\n\t:se=\\E[23m:mb=\\E[5m:md=\\E[1m:mr=\\E[7m:me=\\E[m:ms:\\\n\t:Co#8:pa#64:AF=\\E[3%dm:AB=\\E[4%dm:op=\\E[39;49m:AX:\\\n\t:vb=\\Eg:G0:as=\\E(0:ae=\\E(B:\\\n\t:ac=\\140\\140aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++,,hhII00:\\\n\t:po=\\E[5i:pf=\\E[4i:Km=\\E[M:k0=\\E[10~:k1=\\EOP:k2=\\EOQ:\\\n\t:k3=\\EOR:k4=\\EOS:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:\\\n\t:k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:F1=\\E[23~:F2=\\E[24~:\\\n\t:F3=\\E[1;2P:F4=\\E[1;2Q:F5=\\E[1;2R:F6=\\E[1;2S:\\\n\t:F7=\\E[15;2~:F8=\\E[17;2~:F9=\\E[18;2~:FA=\\E[19;2~:kb=\x7f:\\\n\t:K2=\\EOE:kB=\\E[Z:kF=\\E[1;2B:kR=\\E[1;2A:*4=\\E[3;2~:\\\n\t:*7=\\E[1;2F:#2=\\E[1;2H:#3=\\E[2;2~:#4=\\E[1;2D:%c=\\E[6;2~:\\\n\t:%e=\\E[5;2~:%i=\\E[1;2C:kh=\\E[1~:@1=\\E[1~:kH=\\E[4~:\\\n\t:@7=\\E[4~:kN=\\E[6~:kP=\\E[5~:kI=\\E[2~:kD=\\E[3~:ku=\\EOA:\\\n\t:kd=\\EOB:kr=\\EOC:kl=\\EOD:km:', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:', 'GUESTFISH_OUTPUT': '\\e[0m', 'SSH_TTY': '/dev/pts/0', 'HOSTNAME': 'STORAGE', 'HISTCONTROL': 'ignoredups', 'WINDOW': '0', 'OTOPI_LOGFILE': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log', 'SSH_CONNECTION': 'X.X.X.X 1629 X.X.X.X 22', 'OTOPI_EXECDIR': '/root'} 2018-05-01 18:48:28,796+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Prepare routing rules] 2018-05-01 18:48:28,898+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts] 2018-05-01 18:48:30,402+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: 
[localhost] 2018-05-01 18:48:30,904+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Start libvirt] 2018-05-01 18:48:32,408+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:32,710+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Activate default libvirt network] 2018-05-01 18:48:33,713+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:34,015+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:48:34,316+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 virt_net_out: {'failed': False, u'changed': False} 2018-05-01 18:48:34,617+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Get libvirt interfaces] 2018-05-01 18:48:35,420+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:35,722+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Get routing rules] 2018-05-01 18:48:36,624+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:36,926+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:48:37,328+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 route_rules: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:48:36.428569', u'stdout': u'0:\tfrom all lookup local \n100:\tfrom all to 192.168.122.1/24 lookup main \n101:\tfrom 192.168.122.1/24 lookup main \n32766:\tfrom all lookup main \n32767:\tfrom all lookup default ', u'cmd': [u'ip', u'rule'], 'failed': False, u'delta': u'0:00:00.003246', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'0:\tfrom all lookup local ', u'100:\tfrom all to 192.168.122.1/24 lookup main ', u'101:\tfrom 192.168.122.1/24 lookup main ', u'32766:\tfrom all lookup main ', u'32767:\tfrom all lookup default '], u'start': u'2018-05-01 18:48:36.425323'} 2018-05-01 18:48:37,629+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Save bridge name] 2018-05-01 18:48:37,931+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:38,333+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the bridge to appear on the host] 2018-05-01 18:48:39,035+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:39,537+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Refresh network facts] 2018-05-01 18:48:39,839+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:48:40,141+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Prepare CIDR for virbr0] 2018-05-01 18:48:40,543+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:41,045+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Add outbound route rules] 
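The [Prepare routing rules] play above only inspects the existing policy routing ("Get routing rules" runs plain `ip rule`); the add-rule tasks that follow are then skipped because their conditions evaluate to false. A rough way to repeat that inspection by hand, a sketch rather than the playbook's own code, with the matched strings taken from the route_rules debug output above:

import subprocess

# Dump the policy-routing table the same way the "Get routing rules" task does
# and report whether the two rules for the default libvirt network are present.
rules = subprocess.check_output(['ip', 'rule']).decode()
print(rules)
for fragment in ('from all to 192.168.122.1/24', 'from 192.168.122.1/24'):
    print('%-32s %s' % (fragment, 'present' if fragment in rules else 'missing'))
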
2018-05-01 18:48:41,347+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:48:41,648+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:48:41,950+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 result: {'skipped': True, 'changed': False, 'skip_reason': u'Conditional result was False'} 2018-05-01 18:48:42,351+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Add inbound route rules] 2018-05-01 18:48:42,553+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:48:42,854+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:48:43,255+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 result: {'skipped': True, 'changed': False, 'skip_reason': u'Conditional result was False'} 2018-05-01 18:48:43,557+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Create hosted engine local vm] 2018-05-01 18:48:43,657+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts] 2018-05-01 18:48:44,961+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:45,563+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Register the engine FQDN as a host] 2018-05-01 18:48:45,965+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:46,367+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Create directory for local VM] 2018-05-01 18:48:47,270+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:47,873+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Set local vm dir path] 2018-05-01 18:48:48,174+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:48,576+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Fix local VM directory permission] 2018-05-01 18:48:49,580+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:49,881+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [include_tasks] 2018-05-01 18:48:50,183+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:50,485+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Install ovirt-engine-appliance rpm] 2018-05-01 18:48:51,689+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:51,990+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Parse appliance configuration for path] 2018-05-01 18:48:52,893+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:53,295+0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:48:53,697+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 APPLIANCE_OVA_OUT: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:48:52.645816', u'stdout': u'/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.2-20180329.1.el7.centos.ova', u'cmd': u"grep path /etc/ovirt-hosted-engine/10-appliance.conf | cut -f2 -d'='", 'failed': False, u'delta': u'0:00:00.007232', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.2-20180329.1.el7.centos.ova'], u'start': u'2018-05-01 18:48:52.638584'} 2018-05-01 18:48:53,998+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Parse appliance configuration for sha1sum] 2018-05-01 18:48:54,600+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:48:54,902+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:48:55,304+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 APPLIANCE_OVA_SHA1: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:48:54.410083', u'stdout': u'9021e2d6c7062af23a291f1b4ba6357359e26a42', u'cmd': u"grep sha1sum /etc/ovirt-hosted-engine/10-appliance.conf | cut -f2 -d'='", 'failed': False, u'delta': u'0:00:00.005222', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'9021e2d6c7062af23a291f1b4ba6357359e26a42'], u'start': u'2018-05-01 18:48:54.404861'} 2018-05-01 18:48:55,605+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Get OVA path] 2018-05-01 18:48:55,907+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:56,309+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:48:56,610+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 APPLIANCE_OVA_PATH: /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.2-20180329.1.el7.centos.ova 2018-05-01 18:48:56,911+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Compute sha1sum] 2018-05-01 18:48:59,318+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:48:59,720+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:49:00,021+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 ova_stats: {'failed': False, u'stat': {u'charset': u'binary', u'uid': 0, u'exists': True, u'attr_flags': u'e', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1522339491.0, u'block_size': 4096, u'inode': 2624025, u'isgid': False, u'size': 991358359, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': u'18446744073621295311', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 1936256, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.2-20180329.1.el7.centos.ova', u'xusr': False, u'atime': 1525052771.2458162, u'mimetype': u'application/x-gzip', 
u'ctime': 1524856606.5795045, u'isblk': False, u'xgrp': False, u'dev': 2052, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'checksum': u'9021e2d6c7062af23a291f1b4ba6357359e26a42', u'islnk': False, u'attributes': [u'extents']}, u'changed': False} 2018-05-01 18:49:00,322+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Compare sha1sum] 2018-05-01 18:49:00,624+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:49:00,926+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Register appliance PATH] 2018-05-01 18:49:01,328+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:49:01,629+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:49:01,931+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 APPLIANCE_OVA_PATH: /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.2-20180329.1.el7.centos.ova 2018-05-01 18:49:02,232+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Extract appliance to local VM directory] 2018-05-01 18:49:57,453+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:49:57,755+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Find the appliance image] 2018-05-01 18:49:58,558+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:49:59,060+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:49:59,361+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 app_img: {u'files': [{u'uid': 0, u'woth': False, u'mtime': 1522339061.0, u'inode': 1572898, u'isgid': False, u'size': 3090546688, u'roth': True, u'isuid': False, u'isreg': True, u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': True, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/var/tmp/localvmUHZdJd/images/b1b4cec4-8d07-4195-a115-bf30ab581acb/3c52782d-9eef-4ec6-b6cb-275a566358e1', u'xusr': True, u'atime': 1525171779.668793, u'isdir': False, u'ctime': 1525171796.9551125, u'wgrp': False, u'xgrp': True, u'dev': 2053, u'isblk': False, u'isfifo': False, u'mode': u'0755', u'islnk': False}], u'changed': False, 'failed': False, u'examined': 3, u'msg': u'', u'matched': 1} 2018-05-01 18:49:59,763+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Get appliance disk size] 2018-05-01 18:50:00,666+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:50:00,967+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:50:01,369+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 qemu_img_out: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:50:00.476000', u'stdout': u'{\n "virtual-size": 53687091200,\n "filename": "/var/tmp/localvmUHZdJd/images/b1b4cec4-8d07-4195-a115-bf30ab581acb/3c52782d-9eef-4ec6-b6cb-275a566358e1",\n "cluster-size": 65536,\n "format": "qcow2",\n "actual-size": 
3090550784,\n "format-specific": {\n "type": "qcow2",\n "data": {\n "compat": "1.1",\n "lazy-refcounts": false,\n "refcount-bits": 16,\n "corrupt": false\n }\n },\n "dirty-flag": false\n}', u'cmd': [u'qemu-img', u'info', u'--output=json', u'/var/tmp/localvmUHZdJd/images/b1b4cec4-8d07-4195-a115-bf30ab581acb/3c52782d-9eef-4ec6-b6cb-275a566358e1'], 'failed': False, u'delta': u'0:00:00.156726', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'{', u' "virtual-size": 53687091200,', u' "filename": "/var/tmp/localvmUHZdJd/images/b1b4cec4-8d07-4195-a115-bf30ab581acb/3c52782d-9eef-4ec6-b6cb-275a566358e1",', u' "cluster-size": 65536,', u' "format": "qcow2",', u' "actual-size": 3090550784,', u' "format-specific": {', u' "type": "qcow2",', u' "data": {', u' "compat": "1.1",', u' "lazy-refcounts": false,', u' "refcount-bits": 16,', u' "corrupt": false', u' }', u' },', u' "dirty-flag": false', u'}'], u'start': u'2018-05-01 18:50:00.319274'} 2018-05-01 18:50:01,670+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Parse qemu-img output] 2018-05-01 18:50:02,072+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:50:02,474+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:50:02,775+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 virtual_size: 53687091200 2018-05-01 18:50:03,077+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Create cloud init user-data and meta-data files] 2018-05-01 18:50:05,883+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Create ISO disk] 2018-05-01 18:50:06,486+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:50:06,787+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Create local VM] 2018-05-01 18:50:08,893+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:50:09,194+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:50:09,595+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 create_local_vm: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:50:08.660986', u'stdout': u'\nStarting install...\nDomain creation completed.', u'cmd': [u'virt-install', u'-n', u'HostedEngineLocal', u'--os-variant', u'rhel7', u'--virt-type', u'kvm', u'--memory', u'4096', u'--vcpus', u'4', u'--network', u'network=default,mac=00:16:3e:22:82:01,model=virtio', u'--disk', u'/var/tmp/localvmUHZdJd/images/b1b4cec4-8d07-4195-a115-bf30ab581acb/3c52782d-9eef-4ec6-b6cb-275a566358e1', u'--import', u'--disk', u'path=/var/tmp/localvmUHZdJd/seed.iso,device=cdrom', u'--noautoconsole', u'--rng', u'/dev/random', u'--graphics', u'vnc', u'--video', u'vga', u'--sound', u'none', u'--controller', u'usb,model=none', u'--memballoon', u'none', u'--boot', u'hd,menu=off', u'--clock', u'kvmclock_present=yes'], 'failed': False, u'delta': u'0:00:01.300819', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'', u'Starting install...', u'Domain creation completed.'], u'start': u'2018-05-01 18:50:07.360167'} 2018-05-01 18:50:09,896+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Get 
local VM IP] 2018-05-01 18:50:21,318+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:50:21,820+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:50:22,121+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 local_vm_ip: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:50:21.168308', u'stdout': u'192.168.122.213', u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:22:82:01 | awk '{ print $5 }' | cut -f1 -d'/'", 'failed': False, 'attempts': 2, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.054083', 'stdout_lines': [u'192.168.122.213'], u'start': u'2018-05-01 18:50:21.114225'} 2018-05-01 18:50:22,522+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Remove eventually entries for the local VM from /etc/hosts] 2018-05-01 18:50:23,324+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:50:23,625+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Create an entry in /etc/hosts for the local VM] 2018-05-01 18:50:24,227+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:50:24,629+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for SSH to restart on the local VM] 2018-05-01 18:50:55,688+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost -> localhost] 2018-05-01 18:50:56,290+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [engine] 2018-05-01 18:50:56,391+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts] 2018-05-01 18:50:59,197+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [ovirt-engine-2.xxxx.net] 2018-05-01 18:50:59,800+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the local VM] 2018-05-01 18:51:06,414+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [ovirt-engine-2.xxxx.net] 2018-05-01 18:51:06,716+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Add an entry for this host on /etc/hosts on the local VM] 2018-05-01 18:51:08,621+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [ovirt-engine-2.xxxx.net] 2018-05-01 18:51:09,123+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Set FQDN] 2018-05-01 18:51:11,227+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [ovirt-engine-2.xxxx.net] 2018-05-01 18:51:11,628+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Force the local VM FQDN to resolve on 127.0.0.1] 2018-05-01 18:51:13,532+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [ovirt-engine-2.xxxx.net] 2018-05-01 18:51:13,934+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Restore sshd reverse DNS lookups] 2018-05-01 18:51:15,538+0800 INFO 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [ovirt-engine-2.xxxx.net] 2018-05-01 18:51:15,940+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Generate an answer file for engine-setup] 2018-05-01 18:51:18,747+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [ovirt-engine-2.xxxx.net] 2018-05-01 18:51:19,149+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Include before engine-setup custom tasks files for the engine VM] 2018-05-01 18:51:19,851+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:51:20,253+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 include_before_engine_setup_results: {'skipped_reason': u'No items in the list', 'skipped': True, 'results': [], 'changed': False} 2018-05-01 18:51:20,654+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Execute engine-setup] 2018-05-01 18:53:11,559+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [ovirt-engine-2.xxxx.net] 2018-05-01 18:53:12,161+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:53:12,662+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 engine_setup_out: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:54:32.679604', u'stdout': u"[ INFO ] Stage: Initializing\n[ INFO ] Stage: Environment setup\n Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/root/ovirt-engine-answers', '/root/heanswers.conf']\n Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20180501185244-gtleb5.log\n Version: otopi-1.7.7 (otopi-1.7.7-1.el7.centos)\n[ INFO ] Stage: Environment packages setup\n[ INFO ] Stage: Programs detection\n[ INFO ] Stage: Environment setup\n[ INFO ] Stage: Environment customization\n \n --== PRODUCT OPTIONS ==--\n \n Configure ovirt-provider-ovn (Yes, No) [Yes]: \n Configure Image I/O Proxy on this host (Yes, No) [Yes]: \n \n --== PACKAGES ==--\n \n \n --== NETWORK CONFIGURATION ==--\n \n[ INFO ] firewalld will be configured as firewall manager.\n \n --== DATABASE CONFIGURATION ==--\n \n \n --== OVIRT ENGINE CONFIGURATION ==--\n \n Use default credentials (admin at internal) for ovirt-provider-ovn (Yes, No) [Yes]: \n \n --== STORAGE CONFIGURATION ==--\n \n \n --== PKI CONFIGURATION ==--\n \n \n --== APACHE CONFIGURATION ==--\n \n \n --== SYSTEM CONFIGURATION ==--\n \n \n --== MISC CONFIGURATION ==--\n \n Please choose Data Warehouse sampling scale:\n (1) Basic\n (2) Full\n (1, 2)[1]: \n \n --== END OF CONFIGURATION ==--\n \n[ INFO ] Stage: Setup validation\n[WARNING] Cannot validate host name settings, reason: resolved host does not match any of the local addresses\n[WARNING] Less than 16384MB of memory is available\n \n --== CONFIGURATION PREVIEW ==--\n \n Application mode : both\n Default SAN wipe after delete : False\n Firewall manager : firewalld\n Update Firewall : True\n Host FQDN : ovirt-engine-2.xxxx.net\n Configure local Engine database : True\n Set application as default page : True\n Configure Apache SSL : True\n Engine database secured connection : False\n Engine database user name : engine\n Engine database name : engine\n 
Engine database host : localhost\n Engine database port : 5432\n Engine database host name validation : False\n Engine installation : True\n PKI organization : xxxx.net\n Set up ovirt-provider-ovn : True\n Configure WebSocket Proxy : True\n DWH installation : True\n DWH database secured connection : False\n DWH database host : localhost\n Configure local DWH database : True\n Configure Image I/O Proxy : True\n Configure VMConsole Proxy : True\n[ INFO ] Stage: Transaction setup\n[ INFO ] Stopping engine service\n[ INFO ] Stopping ovirt-fence-kdump-listener service\n[ INFO ] Stopping dwh service\n[ INFO ] Stopping Image I/O Proxy service\n[ INFO ] Stopping vmconsole-proxy service\n[ INFO ] Stopping websocket-proxy service\n[ INFO ] Stage: Misc configuration\n[ INFO ] Stage: Package installation\n[ INFO ] Stage: Misc configuration\n[ INFO ] Upgrading CA\n[ INFO ] Initializing PostgreSQL\n[ INFO ] Creating PostgreSQL 'engine' database\n[ INFO ] Configuring PostgreSQL\n[ INFO ] Creating PostgreSQL 'ovirt_engine_history' database\n[ INFO ] Configuring PostgreSQL\n[ INFO ] Creating CA\n[ INFO ] Creating/refreshing DWH database schema\n[ INFO ] Configuring Image I/O Proxy\n[ INFO ] Setting up ovirt-vmconsole proxy helper PKI artifacts\n[ INFO ] Setting up ovirt-vmconsole SSH PKI artifacts\n[ INFO ] Configuring WebSocket Proxy\n[ INFO ] Creating/refreshing Engine database schema\n[ INFO ] Creating/refreshing Engine 'internal' domain database schema\n[ INFO ] Adding default OVN provider to database\n[ INFO ] Adding OVN provider secret to database\n[ INFO ] Setting a password for internal user admin\n[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'\n[ INFO ] Stage: Transaction commit\n[ INFO ] Stage: Closing up\n[ INFO ] Starting engine service\n[ INFO ] Starting dwh service\n[ INFO ] Restarting ovirt-vmconsole proxy service\n \n --== SUMMARY ==--\n \n[ INFO ] Restarting httpd\n Please use the user 'admin at internal' and password specified in order to login\n Web access is enabled at:\n http://ovirt-engine-2.xxxx.net:80/ovirt-engine\n https://ovirt-engine-2.xxxx.net:443/ovirt-engine\n Internal CA 5E:C1:76:E4:CF:9D:5A:95:CF:80:C1:A3:DA:47:D4:2C:7C:8F:A4:65\n SSH fingerprint: SHA256:djaEwoxC2/EKf5vGu8xd6ee5iJRuub6/C523dOnJ+60\n[WARNING] Less than 16384MB of memory is available\n \n --== END OF SUMMARY ==--\n \n[ INFO ] Stage: Clean up\n Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180501185244-gtleb5.log\n[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180501185432-setup.conf'\n[ INFO ] Stage: Pre-termination\n[ INFO ] Stage: Termination\n[ INFO ] Execution of setup completed successfully", u'cmd': [u'/usr/bin/engine-setup', u'--offline', u'--config-append=/root/ovirt-engine-answers', u'--config-append=/root/heanswers.conf'], 'failed': False, u'delta': u'0:01:49.047038', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'[ INFO ] Stage: Initializing', u'[ INFO ] Stage: Environment setup', u" Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/root/ovirt-engine-answers', '/root/heanswers.conf']", u' Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20180501185244-gtleb5.log', u' Version: otopi-1.7.7 (otopi-1.7.7-1.el7.centos)', u'[ INFO ] Stage: Environment packages setup', u'[ INFO ] Stage: Programs detection', u'[ INFO ] Stage: Environment setup', u'[ INFO ] Stage: Environment customization', u' ', u' --== 
PRODUCT OPTIONS ==--', u' ', u' Configure ovirt-provider-ovn (Yes, No) [Yes]: ', u' Configure Image I/O Proxy on this host (Yes, No) [Yes]: ', u' ', u' --== PACKAGES ==--', u' ', u' ', u' --== NETWORK CONFIGURATION ==--', u' ', u'[ INFO ] firewalld will be configured as firewall manager.', u' ', u' --== DATABASE CONFIGURATION ==--', u' ', u' ', u' --== OVIRT ENGINE CONFIGURATION ==--', u' ', u' Use default credentials (admin at internal) for ovirt-provider-ovn (Yes, No) [Yes]: ', u' ', u' --== STORAGE CONFIGURATION ==--', u' ', u' ', u' --== PKI CONFIGURATION ==--', u' ', u' ', u' --== APACHE CONFIGURATION ==--', u' ', u' ', u' --== SYSTEM CONFIGURATION ==--', u' ', u' ', u' --== MISC CONFIGURATION ==--', u' ', u' Please choose Data Warehouse sampling scale:', u' (1) Basic', u' (2) Full', u' (1, 2)[1]: ', u' ', u' --== END OF CONFIGURATION ==--', u' ', u'[ INFO ] Stage: Setup validation', u'[WARNING] Cannot validate host name settings, reason: resolved host does not match any of the local addresses', u'[WARNING] Less than 16384MB of memory is available', u' ', u' --== CONFIGURATION PREVIEW ==--', u' ', u' Application mode : both', u' Default SAN wipe after delete : False', u' Firewall manager : firewalld', u' Update Firewall : True', u' Host FQDN : ovirt-engine-2.xxxx.net', u' Configure local Engine database : True', u' Set application as default page : True', u' Configure Apache SSL : True', u' Engine database secured connection : False', u' Engine database user name : engine', u' Engine database name : engine', u' Engine database host : localhost', u' Engine database port : 5432', u' Engine database host name validation : False', u' Engine installation : True', u' PKI organization : xxxx.net', u' Set up ovirt-provider-ovn : True', u' Configure WebSocket Proxy : True', u' DWH installation : True', u' DWH database secured connection : False', u' DWH database host : localhost', u' Configure local DWH database : True', u' Configure Image I/O Proxy : True', u' Configure VMConsole Proxy : True', u'[ INFO ] Stage: Transaction setup', u'[ INFO ] Stopping engine service', u'[ INFO ] Stopping ovirt-fence-kdump-listener service', u'[ INFO ] Stopping dwh service', u'[ INFO ] Stopping Image I/O Proxy service', u'[ INFO ] Stopping vmconsole-proxy service', u'[ INFO ] Stopping websocket-proxy service', u'[ INFO ] Stage: Misc configuration', u'[ INFO ] Stage: Package installation', u'[ INFO ] Stage: Misc configuration', u'[ INFO ] Upgrading CA', u'[ INFO ] Initializing PostgreSQL', u"[ INFO ] Creating PostgreSQL 'engine' database", u'[ INFO ] Configuring PostgreSQL', u"[ INFO ] Creating PostgreSQL 'ovirt_engine_history' database", u'[ INFO ] Configuring PostgreSQL', u'[ INFO ] Creating CA', u'[ INFO ] Creating/refreshing DWH database schema', u'[ INFO ] Configuring Image I/O Proxy', u'[ INFO ] Setting up ovirt-vmconsole proxy helper PKI artifacts', u'[ INFO ] Setting up ovirt-vmconsole SSH PKI artifacts', u'[ INFO ] Configuring WebSocket Proxy', u'[ INFO ] Creating/refreshing Engine database schema', u"[ INFO ] Creating/refreshing Engine 'internal' domain database schema", u'[ INFO ] Adding default OVN provider to database', u'[ INFO ] Adding OVN provider secret to database', u'[ INFO ] Setting a password for internal user admin', u"[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'", u'[ INFO ] Stage: Transaction commit', u'[ INFO ] Stage: Closing up', u'[ INFO ] Starting engine service', u'[ INFO ] Starting dwh service', u'[ INFO ] Restarting 
ovirt-vmconsole proxy service', u' ', u' --== SUMMARY ==--', u' ', u'[ INFO ] Restarting httpd', u" Please use the user 'admin at internal' and password specified in order to login", u' Web access is enabled at:', u' http://ovirt-engine-2.xxxx.net:80/ovirt-engine', u' https://ovirt-engine-2.xxxx.net:443/ovirt-engine', u' Internal CA 5E:C1:76:E4:CF:9D:5A:95:CF:80:C1:A3:DA:47:D4:2C:7C:8F:A4:65', u' SSH fingerprint: SHA256:djaEwoxC2/EKf5vGu8xd6ee5iJRuub6/C523dOnJ+60', u'[WARNING] Less than 16384MB of memory is available', u' ', u' --== END OF SUMMARY ==--', u' ', u'[ INFO ] Stage: Clean up', u' Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180501185244-gtleb5.log', u"[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180501185432-setup.conf'", u'[ INFO ] Stage: Pre-termination', u'[ INFO ] Stage: Termination', u'[ INFO ] Execution of setup completed successfully'], u'start': u'2018-05-01 18:52:43.632566'} 2018-05-01 18:53:13,164+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Include after engine-setup custom tasks files for the engine VM] 2018-05-01 18:53:14,066+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:53:14,568+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 include_after_engine_setup_results: {'skipped_reason': u'No items in the list', 'skipped': True, 'results': [], 'changed': False} 2018-05-01 18:53:14,869+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Configure LibgfApi support] 2018-05-01 18:53:15,270+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [ovirt-engine-2.xxxx.net] 2018-05-01 18:53:15,671+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:53:16,072+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 libgfapi_support_out: {'skipped': True, 'changed': False, 'skip_reason': u'Conditional result was False'} 2018-05-01 18:53:16,473+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Restart ovirt-engine service for LibgfApi support] 2018-05-01 18:53:16,874+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [ovirt-engine-2.xxxx.net] 2018-05-01 18:53:17,276+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:53:17,777+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 restart_libgfapi_support_out: {'skipped': True, 'changed': False, 'skip_reason': u'Conditional result was False'} 2018-05-01 18:53:18,178+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Mask cloud-init services to speed up future boot] 2018-05-01 18:53:22,986+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Clean up bootstrap answer file] 2018-05-01 18:53:24,489+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [ovirt-engine-2.xxxx.net] 2018-05-01 18:53:24,890+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [localhost] 2018-05-01 18:53:24,991+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 
TASK [Gathering Facts] 2018-05-01 18:53:26,695+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:53:27,297+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for ovirt-engine service to start] 2018-05-01 18:53:28,901+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:53:29,604+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:53:30,106+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 engine_status: {u'status': 200, u'content_length': u'31', u'cookies': {u'locale': u'en_US'}, u'set_cookie': u'locale=en_US; path=/; HttpOnly; Max-Age=2147483647; Expires=Sun, 19-May-2086 14:08:57 GMT', u'url': u'http://ovirt-engine-2.xxxx.net/ovirt-engine/services/health', u'changed': False, u'vary': u'Accept-Encoding', 'attempts': 1, u'server': u'Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips', u'content': u'DB Up!Welcome to Health Status!', 'failed': False, u'connection': u'close', u'content_encoding': u'identity', u'content_type': u'text/html;charset=ISO-8859-1', u'date': u'Tue, 01 May 2018 10:54:50 GMT', u'redirected': False, u'msg': u'OK (31 bytes)'} 2018-05-01 18:53:30,507+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Detect VLAN ID] 2018-05-01 18:53:31,410+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:53:31,913+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 18:53:32,314+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 vlan_id_out: {'stderr_lines': [], u'changed': True, u'end': u'2018-05-01 18:53:31.187734', u'stdout': u'', u'cmd': u"ip -d link show bond0 | grep vlan | grep -Po 'id \\K[\\d]+' | cat", 'failed': False, u'delta': u'0:00:00.009722', u'stderr': u'', u'rc': 0, 'stdout_lines': [], u'start': u'2018-05-01 18:53:31.178012'} 2018-05-01 18:53:32,916+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Set Engine public key as authorized key without validating the TLS/SSL certificates] 2018-05-01 18:53:34,120+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-05-01 18:53:34,622+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [include_tasks] 2018-05-01 18:53:35,024+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:53:35,525+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Obtain SSO token using username/password credentials] 2018-05-01 18:53:37,330+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 18:53:37,732+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Enable GlusterFS at cluster level] 2018-05-01 18:53:38,234+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost] 2018-05-01 18:53:38,636+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Set VLAN ID at datacenter level] 2018-05-01 18:53:39,138+0800 
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 skipping: [localhost]
2018-05-01 18:53:39,540+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Force host-deploy in offline mode]
2018-05-01 18:53:41,044+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost]
2018-05-01 18:53:41,546+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Add host]
2018-05-01 18:53:43,250+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-05-01 18:53:43,751+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the host to be up]
2018-05-01 19:05:28,968+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': 120, u'invocation': {u'module_args': {u'pattern': u'name=STORAGE', u'fetch_nested': False, u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}}
2018-05-01 19:05:29,069+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 120, "changed": false}
2018-05-01 19:05:29,571+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [include_tasks]
2018-05-01 19:05:30,073+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost]
2018-05-01 19:05:30,475+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Remove local vm dir]
2018-05-01 19:05:31,578+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-05-01 19:05:32,080+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug]
2018-05-01 19:05:32,582+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 rm_localvm_dir: {'failed': False, u'state': u'absent', u'changed': True, u'diff': {u'after': {u'path': u'/var/tmp/localvmUHZdJd', u'state': u'absent'}, u'before': {u'path': u'/var/tmp/localvmUHZdJd', u'state': u'directory'}}, u'path': u'/var/tmp/localvmUHZdJd'}
2018-05-01 19:05:33,084+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Notify the user about a failure]
2018-05-01 19:05:33,486+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'msg': u'The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n', u'changed': False, u'_ansible_no_log': False}
2018-05-01 19:05:33,587+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
2018-05-01 19:05:34,089+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 57 changed: 19 unreachable: 0 skipped: 7 failed: 2
2018-05-01 19:05:34,190+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [ovirt-engine-2.xxxx.net] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0
2018-05-01 19:05:34,291+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2
2018-05-01 19:05:34,292+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
2018-05-01 19:05:34,292+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry
2018-05-01 19:05:34,292+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
2018-05-01 19:05:34,293+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using
2018-05-01 19:05:34,293+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger
2018-05-01 19:05:34,294+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 `result|succeeded` instead use `result is succeeded`. This feature will be
2018-05-01 19:05:34,294+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger
2018-05-01 19:05:34,295+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 removed in version 2.9. Deprecation warnings can be disabled by setting
2018-05-01 19:05:34,296+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger
2018-05-01 19:05:34,296+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 deprecation_warnings=False in ansible.cfg.
2018-05-01 19:05:34,297+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger
2018-05-01 19:05:34,297+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using
2018-05-01 19:05:34,298+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger
2018-05-01 19:05:34,298+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 `result|succeeded` instead use `result is succeeded`. This feature will be
2018-05-01 19:05:34,299+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger
2018-05-01 19:05:34,299+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 removed in version 2.9. Deprecation warnings can be disabled by setting
2018-05-01 19:05:34,300+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger
2018-05-01 19:05:34,300+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:192 deprecation_warnings=False in ansible.cfg.
2018-05-01 19:05:34,301+0800 DEBUG otopi.plugins.otopi.dialog.human human.format:69 newline sent to logger 2018-05-01 19:05:34,302+0800 DEBUG otopi.context context._executeMethod:143 method exception Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod method['method']() File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 196, in _closeup r = ah.run() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run raise RuntimeError(_('Failed executing ansible-playbook')) RuntimeError: Failed executing ansible-playbook 2018-05-01 19:05:34,304+0800 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook 2018-05-01 19:05:34,305+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 19:05:34,305+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True' 2018-05-01 19:05:34,306+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(, RuntimeError('Failed executing ansible-playbook',), )]' 2018-05-01 19:05:34,309+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 19:05:34,309+0800 INFO otopi.context context.runSequence:741 Stage: Clean up 2018-05-01 19:05:34,310+0800 DEBUG otopi.context context.runSequence:745 STAGE cleanup 2018-05-01 19:05:34,313+0800 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._cleanup 2018-05-01 19:05:34,313+0800 INFO otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:246 Cleaning temporary resources 2018-05-01 19:05:34,315+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 ansible-playbook: cmd: ['/bin/ansible-playbook', '--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', '--inventory=localhost,', '--extra-vars=@/tmp/tmpeaXRWM', '/usr/share/ovirt-hosted-engine-setup/ansible/final_clean.yml'] 2018-05-01 19:05:34,315+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 ansible-playbook: out_path: /tmp/tmpUNR1kD 2018-05-01 19:05:34,315+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155 ansible-playbook: vars_path: /tmp/tmpeaXRWM 2018-05-01 19:05:34,316+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:156 ansible-playbook: env: {'HE_ANSIBLE_LOG_PATH': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-final_clean-20180501190534-5tt7hr.log', 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': 'X.X.X.X 1629 22', 'LOGNAME': 'root', 'USER': 'root', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/Arcconf:/root/bin:/usr/Arcconf', 'HOME': '/root', 'GUESTFISH_RESTORE': '\\e[0m', 'GUESTFISH_INIT': '\\e[1;34m', 'LANG': 'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'SHLVL': '2', 'PWD': '/root', 'HISTSIZE': '1000', 'OTOPI_CALLBACK_OF': '/tmp/tmpUNR1kD', 'XDG_RUNTIME_DIR': '/run/user/0', 'GUESTFISH_PS1': '\\[\\e[1;32m\\]>\\[\\e[0;31m\\] ', 'ANSIBLE_STDOUT_CALLBACK': '1_otopi_json', 'PYTHONPATH': '/usr/share/ovirt-hosted-engine-setup/scripts/..:', 'MAIL': '/var/spool/mail/root', 'ANSIBLE_CALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger', 'XDG_SESSION_ID': '461', 'STY': '9512.pts-0.STORAGE', 'TERMCAP': 'SC|screen|VT 100/ANSI X3.64 virtual 
terminal:\\\n\t:DO=\\E[%dB:LE=\\E[%dD:RI=\\E[%dC:UP=\\E[%dA:bs:bt=\\E[Z:\\\n\t:cd=\\E[J:ce=\\E[K:cl=\\E[H\\E[J:cm=\\E[%i%d;%dH:ct=\\E[3g:\\\n\t:do=^J:nd=\\E[C:pt:rc=\\E8:rs=\\Ec:sc=\\E7:st=\\EH:up=\\EM:\\\n\t:le=^H:bl=^G:cr=^M:it#8:ho=\\E[H:nw=\\EE:ta=^I:is=\\E)0:\\\n\t:li#63:co#237:am:xn:xv:LP:sr=\\EM:al=\\E[L:AL=\\E[%dL:\\\n\t:cs=\\E[%i%d;%dr:dl=\\E[M:DL=\\E[%dM:dc=\\E[P:DC=\\E[%dP:\\\n\t:im=\\E[4h:ei=\\E[4l:mi:IC=\\E[%d@:ks=\\E[?1h\\E=:\\\n\t:ke=\\E[?1l\\E>:vi=\\E[?25l:ve=\\E[34h\\E[?25h:vs=\\E[34l:\\\n\t:ti=\\E[?1049h:te=\\E[?1049l:us=\\E[4m:ue=\\E[24m:so=\\E[3m:\\\n\t:se=\\E[23m:mb=\\E[5m:md=\\E[1m:mr=\\E[7m:me=\\E[m:ms:\\\n\t:Co#8:pa#64:AF=\\E[3%dm:AB=\\E[4%dm:op=\\E[39;49m:AX:\\\n\t:vb=\\Eg:G0:as=\\E(0:ae=\\E(B:\\\n\t:ac=\\140\\140aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++,,hhII00:\\\n\t:po=\\E[5i:pf=\\E[4i:Km=\\E[M:k0=\\E[10~:k1=\\EOP:k2=\\EOQ:\\\n\t:k3=\\EOR:k4=\\EOS:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:\\\n\t:k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:F1=\\E[23~:F2=\\E[24~:\\\n\t:F3=\\E[1;2P:F4=\\E[1;2Q:F5=\\E[1;2R:F6=\\E[1;2S:\\\n\t:F7=\\E[15;2~:F8=\\E[17;2~:F9=\\E[18;2~:FA=\\E[19;2~:kb=\x7f:\\\n\t:K2=\\EOE:kB=\\E[Z:kF=\\E[1;2B:kR=\\E[1;2A:*4=\\E[3;2~:\\\n\t:*7=\\E[1;2F:#2=\\E[1;2H:#3=\\E[2;2~:#4=\\E[1;2D:%c=\\E[6;2~:\\\n\t:%e=\\E[5;2~:%i=\\E[1;2C:kh=\\E[1~:@1=\\E[1~:kH=\\E[4~:\\\n\t:@7=\\E[4~:kN=\\E[6~:kP=\\E[5~:kI=\\E[2~:kD=\\E[3~:ku=\\EOA:\\\n\t:kd=\\EOB:kr=\\EOC:kl=\\EOD:km:', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:', 'GUESTFISH_OUTPUT': '\\e[0m', 'SSH_TTY': '/dev/pts/0', 'HOSTNAME': 'STORAGE', 'HISTCONTROL': 'ignoredups', 'WINDOW': '0', 'OTOPI_LOGFILE': '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log', 'SSH_CONNECTION': 'X.X.X.X 1629 X.X.X.X 22', 'OTOPI_EXECDIR': '/root'} 2018-05-01 19:05:35,529+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Clean temporary resources] 2018-05-01 19:05:35,630+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts] 2018-05-01 19:05:37,334+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 
ok: [localhost] 2018-05-01 19:05:37,836+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [include_tasks] 2018-05-01 19:05:38,138+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 19:05:38,440+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Remove local vm dir] 2018-05-01 19:05:39,643+0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-05-01 19:05:40,145+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-05-01 19:05:40,447+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 rm_localvm_dir: {'failed': False, u'changed': False, u'path': None} 2018-05-01 19:05:40,747+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 0 2018-05-01 19:05:40,748+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 4 changed: 0 unreachable: 0 skipped: 0 failed: 0 2018-05-01 19:05:40,748+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout: 2018-05-01 19:05:40,748+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr: 2018-05-01 19:05:40,748+0800 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:248 {} 2018-05-01 19:05:40,750+0800 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._cleanup 2018-05-01 19:05:40,750+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 19:05:40,752+0800 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._cleanup 2018-05-01 19:05:40,752+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 19:05:40,753+0800 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._cleanup 2018-05-01 19:05:40,754+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 19:05:40,755+0800 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.otopi.dialog.answer_file.Plugin._generate_answer_file 2018-05-01 19:05:40,757+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 19:05:40,757+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFileContent=str:'# OTOPI answer file, generated by human dialog [environment:default] QUESTION/1/CI_VM_ETC_HOST=str:no QUESTION/1/CI_ROOT_SSH_ACCESS=str:yes QUESTION/2/CI_ROOT_PASSWORD=str:**FILTERED** QUESTION/1/CLOUDINIT_VM_STATIC_IP_ADDRESS=str:X.X.X.X QUESTION/1/OVEHOSTED_GATEWAY=str:X.X.X.X QUESTION/1/CI_ROOT_SSH_PUBKEY=str: QUESTION/2/ENGINE_ADMIN_PASSWORD=str:**FILTERED** QUESTION/1/CI_DNS=str:X.X.X.X8,X.X.X.X7 QUESTION/1/CI_VM_STATIC_NETWORKING=str:static QUESTION/1/ENGINE_ADMIN_PASSWORD=str:**FILTERED** QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpPort=str:25 QUESTION/1/CI_INSTANCE_HOSTNAME=str:ovirt-engine-2.xxxx.net QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str: QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:support at xxxx.net QUESTION/1/ovehosted_bridge_if=str:bond0 QUESTION/1/ovehosted_vmenv_mac=str:00:16:3e:22:82:01 QUESTION/1/ovehosted_vmenv_cpu=str:4 
QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:localhost QUESTION/1/CI_ROOT_PASSWORD=str:**FILTERED** QUESTION/1/DEPLOY_PROCEED=str:yes QUESTION/1/ovehosted_vmenv_mem=str:4096 QUESTION/1/CI_INSTANCE_DOMAINNAME=str:xxxx.net QUESTION/1/DIALOGOVEHOSTED_NOTIF/destEmail=str:support at xxxx.net ' 2018-05-01 19:05:40,759+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 19:05:40,762+0800 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.core.answerfile.Plugin._save_answers_at_cleanup 2018-05-01 19:05:40,763+0800 INFO otopi.plugins.gr_he_common.core.answerfile answerfile._save_answers:72 Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180501190540.conf' 2018-05-01 19:05:40,768+0800 INFO otopi.context context.runSequence:741 Stage: Pre-termination 2018-05-01 19:05:40,769+0800 DEBUG otopi.context context.runSequence:745 STAGE pre-terminate 2018-05-01 19:05:40,771+0800 DEBUG otopi.context context._executeMethod:128 Stage pre-terminate METHOD otopi.plugins.otopi.core.misc.Plugin._preTerminate 2018-05-01 19:05:40,772+0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-05-01 19:05:40,772+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/aborted=bool:'False' 2018-05-01 19:05:40,772+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/debug=int:'0' 2018-05-01 19:05:40,773+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True' 2018-05-01 19:05:40,773+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(, RuntimeError('Failed executing ansible-playbook',), )]' 2018-05-01 19:05:40,774+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/executionDirectory=str:'/root' 2018-05-01 19:05:40,774+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exitCode=list:'[{'priority': 90001, 'code': 0}]' 2018-05-01 19:05:40,774+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/log=bool:'True' 2018-05-01 19:05:40,775+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/pluginGroups=str:'otopi:gr-he-common:gr-he-ansiblesetup' 2018-05-01 19:05:40,775+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/pluginPath=str:'/usr/share/otopi/plugins:/usr/share/ovirt-hosted-engine-setup/scripts/../plugins' 2018-05-01 19:05:40,775+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/suppressEnvironmentKeys=list:'[]' 2018-05-01 19:05:40,776+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chkconfig=str:'/usr/sbin/chkconfig' 2018-05-01 19:05:40,776+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/chronyc=str:'/usr/bin/chronyc' 2018-05-01 19:05:40,777+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/date=str:'/usr/bin/date' 2018-05-01 19:05:40,777+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/dig=str:'/usr/bin/dig' 2018-05-01 19:05:40,777+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/firewall-cmd=str:'/usr/bin/firewall-cmd' 2018-05-01 19:05:40,778+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/genisoimage=str:'/usr/bin/genisoimage' 2018-05-01 19:05:40,778+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/hwclock=str:'/usr/sbin/hwclock' 2018-05-01 19:05:40,778+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/initctl=NoneType:'None' 2018-05-01 19:05:40,779+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV 
COMMAND/ip=str:'/usr/sbin/ip' 2018-05-01 19:05:40,779+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ntpq=str:'/usr/sbin/ntpq' 2018-05-01 19:05:40,779+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ping=str:'/usr/bin/ping' 2018-05-01 19:05:40,780+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/rc=NoneType:'None' 2018-05-01 19:05:40,780+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/rc-update=NoneType:'None' 2018-05-01 19:05:40,780+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/reboot=str:'/usr/sbin/reboot' 2018-05-01 19:05:40,781+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/service=str:'/usr/sbin/service' 2018-05-01 19:05:40,781+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/ssh-keygen=str:'/usr/bin/ssh-keygen' 2018-05-01 19:05:40,782+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV COMMAND/systemctl=str:'/usr/bin/systemctl' 2018-05-01 19:05:40,782+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/configFileAppend=NoneType:'None' 2018-05-01 19:05:40,782+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/configFileName=str:'/etc/ovirt-hosted-engine-setup.conf' 2018-05-01 19:05:40,783+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True' 2018-05-01 19:05:40,783+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/internalPackageTransaction=Transaction:'transaction' 2018-05-01 19:05:40,783+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logDir=str:'/var/log/ovirt-hosted-engine-setup' 2018-05-01 19:05:40,784+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileHandle=file:'' 2018-05-01 19:05:40,784+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log' 2018-05-01 19:05:40,784+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFileNamePrefix=str:'ovirt-hosted-engine-setup' 2018-05-01 19:05:40,785+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilter=_MyLoggerFilter:'filter' 2018-05-01 19:05:40,785+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilterKeys=list:'['OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd']' 2018-05-01 19:05:40,785+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logFilterRe=list:'[<_sre.SRE_Pattern object at 0x22e9880>]' 2018-05-01 19:05:40,786+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/logRemoveAtExit=bool:'False' 2018-05-01 19:05:40,786+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/mainTransaction=Transaction:'transaction' 2018-05-01 19:05:40,787+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/modifiedFiles=list:'[]' 2018-05-01 19:05:40,787+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV CORE/randomizeEvents=bool:'False' 2018-05-01 19:05:40,787+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFile=NoneType:'None' 2018-05-01 19:05:40,788+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFileContent=str:'# OTOPI answer file, generated by human dialog [environment:default] QUESTION/1/CI_VM_ETC_HOST=str:no QUESTION/1/CI_ROOT_SSH_ACCESS=str:yes QUESTION/2/CI_ROOT_PASSWORD=str:**FILTERED** QUESTION/1/CLOUDINIT_VM_STATIC_IP_ADDRESS=str:X.X.X.X QUESTION/1/OVEHOSTED_GATEWAY=str:X.X.X.X QUESTION/1/CI_ROOT_SSH_PUBKEY=str: 
QUESTION/2/ENGINE_ADMIN_PASSWORD=str:**FILTERED** QUESTION/1/CI_DNS=str:X.X.X.X8,X.X.X.X7 QUESTION/1/CI_VM_STATIC_NETWORKING=str:static QUESTION/1/ENGINE_ADMIN_PASSWORD=str:**FILTERED** QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpPort=str:25 QUESTION/1/CI_INSTANCE_HOSTNAME=str:ovirt-engine-2.xxxx.net QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str: QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:support at xxxx.net QUESTION/1/ovehosted_bridge_if=str:bond0 QUESTION/1/ovehosted_vmenv_mac=str:00:16:3e:22:82:01 QUESTION/1/ovehosted_vmenv_cpu=str:4 QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:localhost QUESTION/1/CI_ROOT_PASSWORD=str:**FILTERED** QUESTION/1/DEPLOY_PROCEED=str:yes QUESTION/1/ovehosted_vmenv_mem=str:4096 QUESTION/1/CI_INSTANCE_DOMAINNAME=str:xxxx.net QUESTION/1/DIALOGOVEHOSTED_NOTIF/destEmail=str:support at xxxx.net ' 2018-05-01 19:05:40,788+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/autoAcceptDefault=bool:'False' 2018-05-01 19:05:40,789+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' 2018-05-01 19:05:40,789+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/cliVersion=int:'1' 2018-05-01 19:05:40,789+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/customization=bool:'False' 2018-05-01 19:05:40,790+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/dialect=str:'human' 2018-05-01 19:05:40,790+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV INFO/PACKAGE_NAME=str:'otopi' 2018-05-01 19:05:40,790+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV INFO/PACKAGE_VERSION=str:'1.7.7' 2018-05-01 19:05:40,791+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldAvailable=bool:'True' 2018-05-01 19:05:40,791+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldDisableServices=list:'[]' 2018-05-01 19:05:40,791+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/firewalldEnable=bool:'False' 2018-05-01 19:05:40,792+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/iptablesEnable=bool:'False' 2018-05-01 19:05:40,792+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/iptablesRules=NoneType:'None' 2018-05-01 19:05:40,792+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshEnable=bool:'False' 2018-05-01 19:05:40,793+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshKey=NoneType:'None' 2018-05-01 19:05:40,793+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV NETWORK/sshUser=str:'' 2018-05-01 19:05:40,793+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/ansibleDeployment=bool:'True' 2018-05-01 19:05:40,794+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/checkRequirements=bool:'True' 2018-05-01 19:05:40,794+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/deployProceed=bool:'True' 2018-05-01 19:05:40,795+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/etcAnswerFile=str:'/etc/ovirt-hosted-engine/answers.conf' 2018-05-01 19:05:40,795+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/localVMDir=NoneType:'None' 2018-05-01 19:05:40,795+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/miscReached=bool:'True' 2018-05-01 19:05:40,795+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/nodeSetup=bool:'False' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV 
OVEHOSTED_CORE/rollbackProceed=NoneType:'None' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/rollbackUpgrade=bool:'False' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/screenProceed=NoneType:'None' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/skipTTYCheck=bool:'False' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/tempDir=str:'/var/tmp' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/upgradeProceed=NoneType:'None' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/upgradingAppliance=bool:'False' 2018-05-01 19:05:40,796+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_CORE/userAnswerFile=NoneType:'None' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/adminPassword=str:'**FILTERED**' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/adminUsername=str:'admin at internal' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/appHostName=str:'STORAGE' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/clusterName=str:'Default' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/enableHcGlusterService=NoneType:'None' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/enableLibgfapi=NoneType:'None' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/engineSetupTimeout=int:'1800' 2018-05-01 19:05:40,797+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/insecureSSL=NoneType:'None' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/interactiveAdminPassword=bool:'True' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_ENGINE/temporaryCertificate=NoneType:'None' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_FIRST_HOST/deployWithHE35Hosts=NoneType:'None' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_FIRST_HOST/skipSharedStorageAF=bool:'False' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/allowInvalidBondModes=bool:'False' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/bridgeIf=str:'bond0' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/bridgeName=str:'ovirtmgmt' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/fqdn=str:'ovirt-engine-2.xxxx.net' 2018-05-01 19:05:40,798+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/fqdnReverseValidation=bool:'False' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/gateway=str:'X.X.X.X' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/host_name=str:'STORAGE' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/refuseDeployingWithNM=bool:'False' 2018-05-01 19:05:40,799+0800 
DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/destEmail=str:'support at xxxx.net' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpPort=str:'25' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/smtpServer=str:'localhost' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_NOTIF/sourceEmail=str:'support at xxxx.net' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_SANLOCK/lockspaceName=str:'hosted-engine' 2018-05-01 19:05:40,799+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_SANLOCK/serviceName=str:'sanlock' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/LunID=NoneType:'None' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/blockDeviceSizeGB=NoneType:'None' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/domainType=NoneType:'None' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIDiscoverPassword=NoneType:'None' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIDiscoverUser=NoneType:'None' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortal=NoneType:'None' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalIPAddress=NoneType:'None' 2018-05-01 19:05:40,800+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalPassword=NoneType:'None' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalPort=NoneType:'None' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSIPortalUser=NoneType:'None' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/iSCSITargetName=NoneType:'None' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgDesc=str:'Hosted Engine Image' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgSizeGB=NoneType:'None' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/imgUUID=str:'a8d46c38-af05-4a19-afd9-11e45cb26942' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/lockspaceImageUUID=NoneType:'None' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/lockspaceVolumeUUID=NoneType:'None' 2018-05-01 19:05:40,801+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/metadataImageUUID=NoneType:'None' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/metadataVolumeUUID=NoneType:'None' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/mntOptions=NoneType:'None' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/nfsVersion=NoneType:'None' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/ovfSizeGB=NoneType:'None' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context 
context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/storageDomainConnection=NoneType:'None' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/storageDomainName=str:'hosted_storage' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_STORAGE/volUUID=str:'4de5c53d-9cd6-4897-b6a4-6ec2c8bbffb3' 2018-05-01 19:05:40,802+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupFileName=NoneType:'None' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupImgSizeGB=NoneType:'None' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupImgUUID=NoneType:'None' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/backupVolUUID=NoneType:'None' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/createLMVolumes=bool:'False' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_UPGRADE/dstBackupFileName=NoneType:'None' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/kvmGid=int:'36' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/serviceName=str:'vdsmd' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/useSSL=bool:'True' 2018-05-01 19:05:40,803+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/vdscli=NoneType:'None' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VDSM/vdsmUid=int:'36' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/acceptDownloadEApplianceRPM=NoneType:'None' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceMem=int:'16384' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceVCpus=str:'4' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/applianceVersion=NoneType:'None' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/automateVMShutdown=bool:'True' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cdromUUID=str:'f90cef5e-1d7d-47ae-b3f0-34cabe5a9ff3' 2018-05-01 19:05:40,804+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudInitISO=str:'generate' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitExecuteEngineSetup=bool:'True' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitHostIP=str:'X.X.X.X' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceDomainName=str:'xxxx.net' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitInstanceHostName=str:'ovirt-engine-2.xxxx.net' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitRootPwd=str:'**FILTERED**' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMDNS=str:'X.X.X.X8,X.X.X.X7' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV 
OVEHOSTED_VM/cloudinitVMETCHOSTS=bool:'False' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMStaticCIDR=str:'X.X.X.X/27' 2018-05-01 19:05:40,805+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/cloudinitVMTZ=str:'Asia/Hong_Kong' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/consoleUUID=str:'61d2a2c6-bf6c-441b-b1a9-b77c8e4fc814' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/emulatedMachine=str:'pc' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/localVmUUID=str:'72921200-b111-4515-96a0-19524dd65141' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/maxVCpus=str:'8' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/nicUUID=str:'1b2fd7c7-0200-4bbf-afa8-2337aa0dfac8' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/ovfArchive=str:'' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshAccess=str:'yes' 2018-05-01 19:05:40,806+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/rootSshPubkey=str:'' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmCDRom=NoneType:'None' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:22:82:01' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmMemSizeMB=int:'4096' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV OVEHOSTED_VM/vmVCpus=str:'4' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfDisabledPlugins=list:'[]' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfExpireCache=bool:'True' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfRollback=bool:'True' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/dnfpackagerEnabled=bool:'True' 2018-05-01 19:05:40,807+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/keepAliveInterval=int:'30' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumDisabledPlugins=list:'[]' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumEnabledPlugins=list:'[]' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumExpireCache=bool:'True' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumRollback=bool:'True' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV PACKAGER/yumpackagerEnabled=bool:'True' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_DNS=str:'X.X.X.X8,X.X.X.X7' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_INSTANCE_DOMAINNAME=str:'xxxx.net' 2018-05-01 19:05:40,808+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_INSTANCE_HOSTNAME=str:'ovirt-engine-2.xxxx.net' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_PASSWORD=str:'**FILTERED**' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context 
context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_SSH_ACCESS=str:'yes' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_ROOT_SSH_PUBKEY=str:'' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_VM_ETC_HOST=str:'no' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CI_VM_STATIC_NETWORKING=str:'static' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/CLOUDINIT_VM_STATIC_IP_ADDRESS=str:'X.X.X.X' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DEPLOY_PROCEED=str:'yes' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/destEmail=str:'support at xxxx.net' 2018-05-01 19:05:40,809+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpPort=str:'25' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:'localhost' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:'support at xxxx.net' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/OVEHOSTED_GATEWAY=str:'X.X.X.X' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str:'' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_bridge_if=str:'bond0' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_cpu=str:'4' 2018-05-01 19:05:40,810+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_mac=str:'00:16:3e:22:82:01' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_mem=str:'4096' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/2/CI_ROOT_PASSWORD=str:'**FILTERED**' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV QUESTION/2/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/clockMaxGap=int:'5' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/clockSet=bool:'False' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/Arcconf:/root/bin:/usr/Arcconf' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/reboot=bool:'False' 2018-05-01 19:05:40,811+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/rebootAllow=bool:'True' 2018-05-01 19:05:40,812+0800 DEBUG otopi.context context.dumpEnvironment:869 ENV SYSTEM/rebootDeferTime=int:'10' 2018-05-01 19:05:40,812+0800 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-05-01 19:05:40,813+0800 DEBUG otopi.context context._executeMethod:128 Stage pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate 2018-05-01 19:05:40,813+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 19:05:40,814+0800 INFO otopi.context 
context.runSequence:741 Stage: Termination 2018-05-01 19:05:40,815+0800 DEBUG otopi.context context.runSequence:745 STAGE terminate 2018-05-01 19:05:40,815+0800 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate 2018-05-01 19:05:40,816+0800 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:240 Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch. 2018-05-01 19:05:40,816+0800 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log 2018-05-01 19:05:40,818+0800 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate 2018-05-01 19:05:40,823+0800 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate 2018-05-01 19:05:40,824+0800 DEBUG otopi.context context._executeMethod:135 condition False 2018-05-01 19:05:40,828+0800 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate From callum at well.ox.ac.uk Tue May 1 13:11:57 2018 From: callum at well.ox.ac.uk (Callum Smith) Date: Tue, 1 May 2018 13:11:57 +0000 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage Message-ID: Dear All, It appears that clicking "detach" on the ISO storage domain is a really bad idea. This has gotten half way through the procedure and now can't be recovered from. Is there any advice for re-attaching the ISO storage domain manually? An NFS mount didn't add it back to the "pool" unfortunately. On a separate note, is it possible to migrate this storage to a new location? And if so how. Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From phbailey at redhat.com Tue May 1 13:59:49 2018 From: phbailey at redhat.com (Phillip Bailey) Date: Tue, 1 May 2018 09:59:49 -0400 Subject: [ovirt-users] hosted-engine --deploy Failed In-Reply-To: References: Message-ID: Hi Paul, I'm sorry to hear that you're having trouble with the hosted engine deployment process. I looked through the log, but wasn't able to find anything to explain why setup failed to bring up the VM. To clarify, did both of your deployment attempts fail at the same place in the process with the same error message? Also, did you use the same storage device for the storage domain in both attempts? Our resident expert is out today, but should be back tomorrow. I've copied him on this message, so he can follow up once he's back if no one else has been able to help before then. As far as the documentation goes, there have been major changes to the deployment process over the last few months and unfortunately, the documentation simply hasn't caught up and for that, I apologize. It should be updated in the near future to reflect the changes that have been made. 
Best regards, -Phillip Bailey On Tue, May 1, 2018 at 7:43 AM, Paul.LKW wrote: > Dear All: > Recently I just make a try to create a Self-Hosted Engine oVirt but > unfortunately both two of my box also failed with bad deployment > experience, first of all the online documentation is wrong under "oVirt > Self-Hosted Engine Guide" section, it says the deployment script > "hosted-engine --deploy" will asking for Storage configuration immediately > but it is not true any more, also on both of my 2 box one is configured > with bond interface and one not but also failed, in order to separate the > issue I think better I post the bonded interface one log for all your to > Ref. first, the script runs at "TASK [Wait for the host to be up]" for long > long time then give me the Error > > [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": > []}, "attempts": 120, "changed": false} > [ INFO ] TASK [include_tasks] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Remove local vm dir] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Notify the user about a failure] > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The > system may not be provisioned according to the playbook results: please > check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. > Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. This > feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by > setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. > Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. This > feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by > setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO ] Stage: Clean up > [ INFO ] Cleaning temporary resources > [ INFO ] TASK [Gathering Facts] > [ INFO ] ok: [localhost] > [ INFO ] TASK [include_tasks] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Remove local vm dir] > [ INFO ] ok: [localhost] > [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine- > setup/answers/answers-20180501190540.conf' > [ INFO ] Stage: Pre-termination > [ INFO ] Stage: Termination > [ ERROR ] Hosted Engine deployment failed: please check the logs for the > issue, fix accordingly or re-deploy from scratch. > Log file is located at /var/log/ovirt-hosted-engine-s > etup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log > > Attached is the so long LOG. > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdito at domeyard.com Tue May 1 14:05:01 2018 From: jdito at domeyard.com (Joe DiTommasso) Date: Tue, 01 May 2018 14:05:01 +0000 Subject: [ovirt-users] hosted-engine --deploy Failed In-Reply-To: References: Message-ID: I was having the same issue. I've blown away my logs, but VDSM was failing to start up after attempting to configure vswitch filtering. Have switched to oVirt Node, which seems to be working. On Tue, May 1, 2018, 10:00 Phillip Bailey wrote: > Hi Paul, > > I'm sorry to hear that you're having trouble with the hosted engine > deployment process. 
I looked through the log, but wasn't able to find > anything to explain why setup failed to bring up the VM. To clarify, did > both of your deployment attempts fail at the same place in the process with > the same error message? Also, did you use the same storage device for the > storage domain in both attempts? > > Our resident expert is out today, but should be back tomorrow. I've copied > him on this message, so he can follow up once he's back if no one else has > been able to help before then. > > As far as the documentation goes, there have been major changes to the > deployment process over the last few months and unfortunately, the > documentation simply hasn't caught up and for that, I apologize. It should > be updated in the near future to reflect the changes that have been made. > > Best regards, > > -Phillip Bailey > > On Tue, May 1, 2018 at 7:43 AM, Paul.LKW wrote: > >> Dear All: >> Recently I just make a try to create a Self-Hosted Engine oVirt but >> unfortunately both two of my box also failed with bad deployment >> experience, first of all the online documentation is wrong under "oVirt >> Self-Hosted Engine Guide" section, it says the deployment script >> "hosted-engine --deploy" will asking for Storage configuration immediately >> but it is not true any more, also on both of my 2 box one is configured >> with bond interface and one not but also failed, in order to separate the >> issue I think better I post the bonded interface one log for all your to >> Ref. first, the script runs at "TASK [Wait for the host to be up]" for long >> long time then give me the Error >> >> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": >> {"ovirt_hosts": []}, "attempts": 120, "changed": false} >> [ INFO ] TASK [include_tasks] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [Remove local vm dir] >> [ INFO ] changed: [localhost] >> [ INFO ] TASK [Notify the user about a failure] >> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The >> system may not be provisioned according to the playbook results: please >> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} >> [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. >> Instead of using >> [ ERROR ] `result|succeeded` instead use `result is succeeded`. This >> feature will be >> [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by >> setting >> [ ERROR ] deprecation_warnings=False in ansible.cfg. >> [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. >> Instead of using >> [ ERROR ] `result|succeeded` instead use `result is succeeded`. This >> feature will be >> [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by >> setting >> [ ERROR ] deprecation_warnings=False in ansible.cfg. >> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >> ansible-playbook >> [ INFO ] Stage: Clean up >> [ INFO ] Cleaning temporary resources >> [ INFO ] TASK [Gathering Facts] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [include_tasks] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [Remove local vm dir] >> [ INFO ] ok: [localhost] >> [ INFO ] Generating answer file >> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180501190540.conf' >> [ INFO ] Stage: Pre-termination >> [ INFO ] Stage: Termination >> [ ERROR ] Hosted Engine deployment failed: please check the logs for the >> issue, fix accordingly or re-deploy from scratch. 
>> Log file is located at >> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log >> >> Attached is the so long LOG. >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Tue May 1 14:18:36 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 1 May 2018 17:18:36 +0300 Subject: [ovirt-users] hosted-engine --deploy Failed In-Reply-To: References: Message-ID: Can you provide other logs, vdsm log, for example? The installation of the Engine seem to have succeeded, then it tried to add the host. It was waiting for quite some time to get the host into 'Up' state. If, for example, it was installing packages, and the yum repo was very slow, that may be a reason. But there may be many other reasons as well. Y. On Tue, May 1, 2018 at 2:43 PM, Paul.LKW wrote: > Dear All: > Recently I just make a try to create a Self-Hosted Engine oVirt but > unfortunately both two of my box also failed with bad deployment > experience, first of all the online documentation is wrong under "oVirt > Self-Hosted Engine Guide" section, it says the deployment script > "hosted-engine --deploy" will asking for Storage configuration immediately > but it is not true any more, also on both of my 2 box one is configured > with bond interface and one not but also failed, in order to separate the > issue I think better I post the bonded interface one log for all your to > Ref. first, the script runs at "TASK [Wait for the host to be up]" for long > long time then give me the Error > > [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": > []}, "attempts": 120, "changed": false} > [ INFO ] TASK [include_tasks] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Remove local vm dir] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Notify the user about a failure] > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The > system may not be provisioned according to the playbook results: please > check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. > Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. This > feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by > setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. > Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. This > feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by > setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. 
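For anyone else stuck at "TASK [Wait for the host to be up]", the host-side logs Yaniv asks for above can be gathered with something like the sketch below. It assumes the stock oVirt 4.2 log locations on the host and on the engine VM; PASSWORD is a placeholder, and the engine FQDN (ovirt-engine-2.xxxx.net) and host name (STORAGE) are simply the ones that appear in Paul's log.

# On the host where hosted-engine --deploy was run:
systemctl status vdsmd ovirt-ha-agent ovirt-ha-broker
journalctl -u vdsmd --since today > vdsmd-journal.txt
tail -n 500 /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log

# On the engine VM (it is already answering its health page at this point),
# host-deploy and engine.log usually say why the host never reached 'Up':
ls -lt /var/log/ovirt-engine/host-deploy/
tail -n 200 /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-*.log
tail -n 500 /var/log/ovirt-engine/engine.log

# Optional: ask the engine's REST API directly what it thinks of the host
curl -k -u 'admin@internal:PASSWORD' \
  'https://ovirt-engine-2.xxxx.net/ovirt-engine/api/hosts?search=name%3DSTORAGE'

If the query returns no host at all, the add-host step itself never completed and host-deploy is the log to read; if it returns the host in an install_failed or non_responsive state, vdsm.log on the host is the place to look.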
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO ] Stage: Clean up > [ INFO ] Cleaning temporary resources > [ INFO ] TASK [Gathering Facts] > [ INFO ] ok: [localhost] > [ INFO ] TASK [include_tasks] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Remove local vm dir] > [ INFO ] ok: [localhost] > [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine- > setup/answers/answers-20180501190540.conf' > [ INFO ] Stage: Pre-termination > [ INFO ] Stage: Termination > [ ERROR ] Hosted Engine deployment failed: please check the logs for the > issue, fix accordingly or re-deploy from scratch. > Log file is located at /var/log/ovirt-hosted-engine-s > etup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log > > Attached is the so long LOG. > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nesretep at chem.byu.edu Tue May 1 15:36:01 2018 From: nesretep at chem.byu.edu (Kristian Petersen) Date: Tue, 1 May 2018 09:36:01 -0600 Subject: [ovirt-users] User portal permissions Message-ID: I have a user that I want to be able to create VMs, create and attach disks to the new VM, add a network interface to the VM, and get the OS installed so it is ready to use. I am completely confused on which role (or roles) this user would need to do that without making them an admin. Can someone give me some guidance here? -- Kristian Petersen System Administrator BYU Dept. of Chemistry and Biochemistry -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdito at domeyard.com Tue May 1 16:08:05 2018 From: jdito at domeyard.com (Joe DiTommasso) Date: Tue, 1 May 2018 12:08:05 -0400 Subject: [ovirt-users] hosted-engine --deploy Failed In-Reply-To: References: Message-ID: I ran through this again to regenerate the logs. It's been 100% repeatable for me on a fresh 7.4 install, running 'hosted-engine --deploy' or the preconfigured storage option in the cockpit oVirt UI. Deploying hyperconverged from the cockpit UI worked, however. Attaching contents of /var/log from hosted-engine and the physical host. This is what I got from 'journalctl -u vdsmd', wasn't reflected in VDSM logs. May 01 11:01:27 sum-glovirt-05.dy.gl systemd[1]: Starting Virtual Desktop Server Manager... 
May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running mkdirs May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running configure_coredump May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running configure_vdsm_logs May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running wait_for_network May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running run_init_hooks May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running check_is_configured May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: abrt is already configured for vdsm May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: lvm is configured for vdsm May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: libvirt is already configured for vdsm May 01 11:01:27 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: Current revision of multipath.conf detected, preserving May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: schema is already configuredvdsm: Running validate_configuration May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: SUCCESS: ssl configured to true. No conflicts May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running prepare_transient_repository May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running syslog_available May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: vdsm: Running nwfilter May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: libvirt: Network Filter Driver error : this function is not supported by the connection driver: virNWFilterLookupByName May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: libvirt: Network Filter Driver error : this function is not supported by the connection driver: virNWFilterDefineXML May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: Traceback (most recent call last): May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: File "/usr/bin/vdsm-tool", line 219, in main May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: return tool_command[cmd]["command"](*args) May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: File "/usr/lib/python2.7/site-packages/vdsm/tool/nwfilter.py", line 40, in main May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: NoMacSpoofingFilter().defineNwFilter(conn) May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: File "/usr/lib/python2.7/site-packages/vdsm/tool/nwfilter.py", line 76, in defineNwFilter May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: nwFilter = conn.nwfilterDefineXML(self.buildFilterXml()) May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: ret = f(*args, **kwargs) May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: return func(inst, *args, **kwargs) May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4279, in nwfilterDefineXML May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: if ret is None:raise libvirtError('virNWFilterDefineXML() failed', 
conn=self) May 01 11:01:28 sum-glovirt-05.dy.gl vdsmd_init_common.sh[15467]: libvirtError: this function is not supported by the connection driver: virNWFilterDefineXML May 01 11:01:28 sum-glovirt-05.dy.gl systemd[1]: vdsmd.service: control process exited, code=exited status=1 May 01 11:01:28 sum-glovirt-05.dy.gl systemd[1]: Failed to start Virtual Desktop Server Manager. May 01 11:01:28 sum-glovirt-05.dy.gl systemd[1]: Unit vdsmd.service entered failed state. May 01 11:01:28 sum-glovirt-05.dy.gl systemd[1]: vdsmd.service failed. May 01 11:01:29 sum-glovirt-05.dy.gl systemd[1]: vdsmd.service holdoff time over, scheduling restart. May 01 11:01:29 sum-glovirt-05.dy.gl systemd[1]: start request repeated too quickly for vdsmd.service May 01 11:01:29 sum-glovirt-05.dy.gl systemd[1]: Failed to start Virtual Desktop Server Manager. May 01 11:01:29 sum-glovirt-05.dy.gl systemd[1]: Unit vdsmd.service entered failed state. May 01 11:01:29 sum-glovirt-05.dy.gl systemd[1]: vdsmd.service failed. On Tue, May 1, 2018 at 10:18 AM, Yaniv Kaul wrote: > Can you provide other logs, vdsm log, for example? > The installation of the Engine seem to have succeeded, then it tried to > add the host. It was waiting for quite some time to get the host into 'Up' > state. > If, for example, it was installing packages, and the yum repo was very > slow, that may be a reason. > But there may be many other reasons as well. > Y. > > On Tue, May 1, 2018 at 2:43 PM, Paul.LKW wrote: > >> Dear All: >> Recently I just make a try to create a Self-Hosted Engine oVirt but >> unfortunately both two of my box also failed with bad deployment >> experience, first of all the online documentation is wrong under "oVirt >> Self-Hosted Engine Guide" section, it says the deployment script >> "hosted-engine --deploy" will asking for Storage configuration immediately >> but it is not true any more, also on both of my 2 box one is configured >> with bond interface and one not but also failed, in order to separate the >> issue I think better I post the bonded interface one log for all your to >> Ref. first, the script runs at "TASK [Wait for the host to be up]" for long >> long time then give me the Error >> >> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": >> {"ovirt_hosts": []}, "attempts": 120, "changed": false} >> [ INFO ] TASK [include_tasks] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [Remove local vm dir] >> [ INFO ] changed: [localhost] >> [ INFO ] TASK [Notify the user about a failure] >> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The >> system may not be provisioned according to the playbook results: please >> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} >> [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. >> Instead of using >> [ ERROR ] `result|succeeded` instead use `result is succeeded`. This >> feature will be >> [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by >> setting >> [ ERROR ] deprecation_warnings=False in ansible.cfg. >> [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is deprecated. >> Instead of using >> [ ERROR ] `result|succeeded` instead use `result is succeeded`. This >> feature will be >> [ ERROR ] removed in version 2.9. Deprecation warnings can be disabled by >> setting >> [ ERROR ] deprecation_warnings=False in ansible.cfg. 
>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >> ansible-playbook >> [ INFO ] Stage: Clean up >> [ INFO ] Cleaning temporary resources >> [ INFO ] TASK [Gathering Facts] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [include_tasks] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [Remove local vm dir] >> [ INFO ] ok: [localhost] >> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine- >> setup/answers/answers-20180501190540.conf' >> [ INFO ] Stage: Pre-termination >> [ INFO ] Stage: Termination >> [ ERROR ] Hosted Engine deployment failed: please check the logs for the >> issue, fix accordingly or re-deploy from scratch. >> Log file is located at /var/log/ovirt-hosted-engine-s >> etup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log >> >> Attached is the so long LOG. >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: hosted-engine-logs.tgz Type: application/x-compressed-tar Size: 960569 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ovirt-host-logs.tgz Type: application/x-compressed-tar Size: 2045706 bytes Desc: not available URL: From paul.lkw at gmail.com Tue May 1 16:48:30 2018 From: paul.lkw at gmail.com (Paul.LKW) Date: Wed, 2 May 2018 00:48:30 +0800 Subject: [ovirt-users] hosted-engine --deploy Failed In-Reply-To: References: Message-ID: <1d993f12-c999-c470-dcb7-c16fabbd217a@gmail.com> Dear Philip: It is not failed at the same point and I am using the localhost NFS as the Storage currently as for testing purpose. For the bonding interface host it even not going till that point asking to setup Storage but the other does reach that point but still failed at some other point. I will provide another LOG later till this one solved first as it will confusing all other in the mailing list I think, ^_^ Best Regards, Paul.LKW Phillip Bailey ? 1/5/2018 21:59 ??: > Hi Paul, > > I'm sorry to hear that you're having trouble with the hosted engine > deployment process. I looked through the log, but wasn't able to find > anything to explain why setup failed to bring up the VM. To clarify, > did both of your deployment attempts fail at the same place in the > process with the same error message? Also, did you use the same > storage device for the storage domain in both attempts? > > Our resident expert is out today, but should be back tomorrow. I've > copied him on this message, so he can follow up once he's back if no > one else has been able to help before then. > > As far as the documentation goes, there have been major changes to the > deployment process over the last few months and unfortunately, the > documentation simply hasn't caught up and for that, I apologize. It > should be updated in the near future to reflect the changes that have > been made. 
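For anyone trying to answer Phillip's question about where each attempt actually died, the relevant errors can usually be pulled straight out of the setup and host logs before re-deploying. A minimal sketch, assuming the default 4.2 log locations (the exact setup log file name differs per run):

    # summarise the failures recorded by the deployment script
    grep -E 'ERROR|FAILED!' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log | tail -n 40

    # check whether the host-side services ever came up during the run
    journalctl -u vdsmd -u ovirt-ha-agent --since "2 hours ago" | grep -iE 'error|fail' | tail -n 40

Comparing that output between the bonded and the non-bonded box should show quickly whether the two hosts are failing for the same reason or for different ones.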
> > Best regards, > > -Phillip Bailey > > On Tue, May 1, 2018 at 7:43 AM, Paul.LKW > wrote: > > Dear All: > Recently I just make a try to create a Self-Hosted Engine oVirt > but unfortunately both two of my box also failed with bad > deployment experience, first of all the online documentation is > wrong under "oVirt Self-Hosted Engine Guide" section, it says the > deployment script "hosted-engine --deploy" will asking for Storage > configuration immediately but it is not true any more, also on > both of my 2 box one is configured with bond interface and one not > but also failed, in order to separate the issue I think better I > post the bonded interface one log for all your to Ref. first, the > script runs at "TASK [Wait for the host to be up]" for long long > time then give me the Error > > [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": > {"ovirt_hosts": []}, "attempts": 120, "changed": false} > [ INFO? ] TASK [include_tasks] > [ INFO? ] ok: [localhost] > [ INFO? ] TASK [Remove local vm dir] > [ INFO? ] changed: [localhost] > [ INFO? ] TASK [Notify the user about a failure] > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": > "The system may not be provisioned according to the playbook > results: please check the logs for the issue, fix accordingly or > re-deploy from scratch.\n"} > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is > deprecated. Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. > This feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be > disabled by setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is > deprecated. Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. > This feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be > disabled by setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO? ] Stage: Clean up > [ INFO? ] Cleaning temporary resources > [ INFO? ] TASK [Gathering Facts] > [ INFO? ] ok: [localhost] > [ INFO? ] TASK [include_tasks] > [ INFO? ] ok: [localhost] > [ INFO? ] TASK [Remove local vm dir] > [ INFO? ] ok: [localhost] > [ INFO? ] Generating answer file > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180501190540.conf' > [ INFO? ] Stage: Pre-termination > [ INFO? ] Stage: Termination > [ ERROR ] Hosted Engine deployment failed: please check the logs > for the issue, fix accordingly or re-deploy from scratch. > ????????? Log file is located at > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log > > Attached is the so long LOG. > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.lkw at gmail.com Tue May 1 16:55:59 2018 From: paul.lkw at gmail.com (Paul.LKW) Date: Wed, 2 May 2018 00:55:59 +0800 Subject: [ovirt-users] hosted-engine --deploy Failed In-Reply-To: References: Message-ID: <92e1829a-44a7-eed8-a759-35e39dcba572@gmail.com> Dear Y: Yes you are right I could see there is a VM up process from "top" list 11273 qemu????? 20?? 0 4896964 3.066g? 13456 S?? 5.0? 
9.8 25:36.69 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-HostedEngineLocal/m+................... but however vdsm.log is zero length. Best Regards, Paul.LKW Yaniv Kaul ? 1/5/2018 22:18 ??: > Can you provide other logs, vdsm log, for example? > The installation of the Engine seem to have succeeded, then it tried > to add the host. It was waiting for quite some time to get the host > into 'Up' state. > If, for example, it was installing packages, and the yum repo was very > slow, that may be a reason. > But there may be many other reasons as well. > Y. > > On Tue, May 1, 2018 at 2:43 PM, Paul.LKW > wrote: > > Dear All: > Recently I just make a try to create a Self-Hosted Engine oVirt > but unfortunately both two of my box also failed with bad > deployment experience, first of all the online documentation is > wrong under "oVirt Self-Hosted Engine Guide" section, it says the > deployment script "hosted-engine --deploy" will asking for Storage > configuration immediately but it is not true any more, also on > both of my 2 box one is configured with bond interface and one not > but also failed, in order to separate the issue I think better I > post the bonded interface one log for all your to Ref. first, the > script runs at "TASK [Wait for the host to be up]" for long long > time then give me the Error > > [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": > {"ovirt_hosts": []}, "attempts": 120, "changed": false} > [ INFO? ] TASK [include_tasks] > [ INFO? ] ok: [localhost] > [ INFO? ] TASK [Remove local vm dir] > [ INFO? ] changed: [localhost] > [ INFO? ] TASK [Notify the user about a failure] > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": > "The system may not be provisioned according to the playbook > results: please check the logs for the issue, fix accordingly or > re-deploy from scratch.\n"} > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is > deprecated. Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. > This feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be > disabled by setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. > [ ERROR ] [DEPRECATION WARNING]: Using tests as filters is > deprecated. Instead of using > [ ERROR ] `result|succeeded` instead use `result is succeeded`. > This feature will be > [ ERROR ] removed in version 2.9. Deprecation warnings can be > disabled by setting > [ ERROR ] deprecation_warnings=False in ansible.cfg. > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO? ] Stage: Clean up > [ INFO? ] Cleaning temporary resources > [ INFO? ] TASK [Gathering Facts] > [ INFO? ] ok: [localhost] > [ INFO? ] TASK [include_tasks] > [ INFO? ] ok: [localhost] > [ INFO? ] TASK [Remove local vm dir] > [ INFO? ] ok: [localhost] > [ INFO? ] Generating answer file > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180501190540.conf' > [ INFO? ] Stage: Pre-termination > [ INFO? ] Stage: Termination > [ ERROR ] Hosted Engine deployment failed: please check the logs > for the issue, fix accordingly or re-deploy from scratch. > ????????? Log file is located at > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180501184459-4v6ctw.log > > Attached is the so long LOG. 
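Given that the HostedEngineLocal qemu process is up but vdsm.log is empty, it is worth checking whether vdsmd ever started at all on that host; the bootstrap VM is launched directly through libvirt at this stage, so it keeps running even if vdsm is down. A quick sketch of the checks (nothing here is specific to this particular host):

    # did vdsm start, and is it writing anything?
    systemctl status vdsmd supervdsmd
    journalctl -u vdsmd --no-pager | tail -n 40
    ls -l /var/log/vdsm/

    # the local bootstrap VM is visible over the read-only libvirt connection
    virsh -r list --all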
> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzygmont at proofpoint.com Tue May 1 20:11:27 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Tue, 1 May 2018 20:11:27 +0000 Subject: [ovirt-users] adding a host Message-ID: I have tried to add a host to the engine and it just takes forever never working or giving any error message. When I look in the engine's server.log I see it says the networks are missing. I thought when you install a node and add it to the engine it will add the networks automatically? The docs don't give much information about this, and I can't even remove the host through the UI. What steps are required to prepare a node when several vlans are involved? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzygmont at proofpoint.com Wed May 2 00:52:17 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Wed, 2 May 2018 00:52:17 +0000 Subject: [ovirt-users] unable to start engine Message-ID: After rebooting the node hosting the engine, I get this: # hosted-engine --connect-storage # hosted-engine --vm-start The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable. ovirt-ha-agent is running and the NFS server is reachable, it used to work. I don't see which log to check or where to look -------------- next part -------------- An HTML attachment was scrubbed... URL: From yquinn at redhat.com Wed May 2 07:26:18 2018 From: yquinn at redhat.com (Yanir Quinn) Date: Wed, 2 May 2018 10:26:18 +0300 Subject: [ovirt-users] unable to start engine In-Reply-To: References: Message-ID: Hi Justin, What are the version release numbers of ovirt-hosted-engine, ovirt-host, ovirt-engine, vdsm, ovirt-host-deploy, libvirt ? What type of installation are you using ? What is the status of the ovirt-ha-*agent* ovirt-ha-broker services ? Does vm.conf file exist ? (e.g. /var/run/ovirt-hosted-engine-ha/vm.conf) What is the output of hosted-engine --vm-status ? Can you provide agent.log, vdsm.log ? Thanks Yanir Quinn On Wed, May 2, 2018 at 3:52 AM, Justin Zygmont wrote: > After rebooting the node hosting the engine, I get this: > > > > # hosted-engine --connect-storage > > # hosted-engine --vm-start > > The hosted engine configuration has not been retrieved from shared > storage. Please ensure that ovirt-ha-agent is running and the storage > server is reachable. > > > > ovirt-ha-agent is running and the NFS server is reachable, it used to > work. I don?t see which log to check or where to look > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yquinn at redhat.com Wed May 2 07:34:16 2018 From: yquinn at redhat.com (Yanir Quinn) Date: Wed, 2 May 2018 10:34:16 +0300 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: Hi, What document are you using ? See if you find the needed information here : https://ovirt.org/documentation/admin-guide/chap-Hosts/ For engine related potential errors i recommend also checking the engine.log and in UI check the events section. 
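A rough sketch of those checks, with placeholder credentials, engine FQDN and cluster id (none of them taken from this thread): on the engine, grep for the missing-network complaints, then compare the cluster's logical network list with what VDSM reports on the node:

    # engine side: which networks does the engine say are required/missing?
    grep -iE 'network.*(required|missing)' /var/log/ovirt-engine/engine.log | tail -n 20

    # which logical networks does the cluster expect?
    curl -sk -u 'admin@internal:PASSWORD' \
        "https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks"

    # node side: which NICs and VLAN devices does VDSM actually see?
    vdsm-client Host getCapabilities | grep -E '"nics"|"vlans"' -A 2

Once the required VLAN interfaces exist on the node, Hosts > Network Interfaces > Setup Host Networks is where they get mapped to the logical networks.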
Regards, Yanir Quinn On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont wrote: > I have tried to add a host to the engine and it just takes forever never > working or giving any error message. When I look in the engine?s > server.log I see it says the networks are missing. > > I thought when you install a node and add it to the engine it will add the > networks automatically? The docs don?t give much information about this, > and I can?t even remove the host through the UI. What steps are required > to prepare a node when several vlans are involved? > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frolland at redhat.com Wed May 2 07:44:48 2018 From: frolland at redhat.com (Fred Rolland) Date: Wed, 2 May 2018 10:44:48 +0300 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: References: Message-ID: Hi, Can you provide logs from engine and Vdsm(SPM)? What is the state now? Thanks, Fred On Tue, May 1, 2018 at 4:11 PM, Callum Smith wrote: > Dear All, > > It appears that clicking "detach" on the ISO storage domain is a really > bad idea. This has gotten half way through the procedure and now can't be > recovered from. Is there any advice for re-attaching the ISO storage domain > manually? An NFS mount didn't add it back to the "pool" unfortunately. > > On a separate note, is it possible to migrate this storage to a new > location? And if so how. > > Regards, > Callum > > -- > > Callum Smith > Research Computing Core > Wellcome Trust Centre for Human Genetics > University of Oxford > e. callum at well.ox.ac.uk > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yquinn at redhat.com Wed May 2 07:53:51 2018 From: yquinn at redhat.com (Yanir Quinn) Date: Wed, 2 May 2018 10:53:51 +0300 Subject: [ovirt-users] unable to start engine In-Reply-To: References: Message-ID: I would also recommend reading: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/troubleshooting On Wed, May 2, 2018 at 10:26 AM, Yanir Quinn wrote: > Hi Justin, > > What are the version release numbers of ovirt-hosted-engine, ovirt-host, > ovirt-engine, vdsm, ovirt-host-deploy, libvirt ? > What type of installation are you using ? > What is the status of the ovirt-ha-*agent* ovirt-ha-broker services ? > Does vm.conf file exist ? (e.g. /var/run/ovirt-hosted-engine-ha/vm.conf) > What is the output of hosted-engine --vm-status ? > Can you provide agent.log, vdsm.log ? > > Thanks > Yanir Quinn > > On Wed, May 2, 2018 at 3:52 AM, Justin Zygmont > wrote: > >> After rebooting the node hosting the engine, I get this: >> >> >> >> # hosted-engine --connect-storage >> >> # hosted-engine --vm-start >> >> The hosted engine configuration has not been retrieved from shared >> storage. Please ensure that ovirt-ha-agent is running and the storage >> server is reachable. >> >> >> >> ovirt-ha-agent is running and the NFS server is reachable, it used to >> work. 
I don?t see which log to check or where to look >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcmr at oticon.com Wed May 2 07:50:25 2018 From: mcmr at oticon.com (Michael Mortensen (MCMR)) Date: Wed, 2 May 2018 07:50:25 +0000 Subject: [ovirt-users] User portal permissions In-Reply-To: References: Message-ID: Hi Kristian, As far as I?ve experienced myself, too, they would need one or more admin roles/permissions. The permissions can however be quite freely distributed to the user, and since you can create a custom role, too, it should be possible to stitch together a rather limited admin-role. Regular non-admin privileges supposedly only gives access to the VM Portal and not actually the Administration Portal (in which you could do all of this). All the best, Mike From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Kristian Petersen Sent: 1. maj 2018 17:36 To: users Subject: [ovirt-users] User portal permissions I have a user that I want to be able to create VMs, create and attach disks to the new VM, add a network interface to the VM, and get the OS installed so it is ready to use. I am completely confused on which role (or roles) this user would need to do that without making them an admin. Can someone give me some guidance here? -- Kristian Petersen System Administrator BYU Dept. of Chemistry and Biochemistry -------------- next part -------------- An HTML attachment was scrubbed... URL: From suporte at logicworks.pt Wed May 2 08:06:11 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Wed, 2 May 2018 09:06:11 +0100 (WEST) Subject: [ovirt-users] Hosted Engine Deploy failed In-Reply-To: References: <746036614.128920.1524762390646.JavaMail.zimbra@logicworks.pt> <1286444869.142564.1524784022269.JavaMail.zimbra@logicworks.pt> Message-ID: <84172131.324901.1525248371716.JavaMail.zimbra@logicworks.pt> I managed to overcome giving this input to configuration, without a DNS server: ... How should the engine VM network be configured (DHCP, Static)[DHCP]? Static Please enter the IP address to be used for the engine VM [192.168.50.200]: 192.168.50.190 [ INFO ] The engine VM will be configured to use 192.168.50.190/24 Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM Engine VM DNS (leave it empty to skip) [127.0.0.1]: Add lines for the appliance itself and for this host to /etc/hosts on the engine VM? Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No] Yes ... De: "Simone Tiraboschi" Para: suporte at logicworks.pt Cc: "users" Enviadas: Sexta-feira, 27 De Abril de 2018 18:04:24 Assunto: Re: [ovirt-users] Hosted Engine Deploy failed On Fri, Apr 27, 2018 at 1:07 AM, < suporte at logicworks.pt > wrote: I found that the setup process creates a different IP on /etc/hosts I choose static IP, enter the IP I want, already have the right configuration on /etc/hosts, but the setup creates another line with a different IP Address. I follow this https://bugzilla.redhat.com/show_bug.cgi?id=1548508 but no luck This is absolutely fine: the node 0 flow creates a locally running VM connected over a natted DHCP network in order to use it to boostrap the rest of the system. 
At the end, the content of that local temporary VM will be moved over the shared storage and the VM will be restarted from there properly connected to your network. BQ_BEGIN De: suporte at logicworks.pt Para: users at ovirt.org Enviadas: Quinta-feira, 26 De Abril de 2018 18:06:30 Assunto: [ovirt-users] Hosted Engine Deploy failed Hi, I'm trying to install oVirt 4.2.2 HE but always get the error: [ INFO ] TASK [Shutdown local VM] [ INFO ] changed: [localhost] [ INFO ] TASK [Wait for local VM shutdown] [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, "cmd": "virsh -r dominfo \"HostedEngineLocal\" | grep State", "delta": "0:00:00.037367", "end": "2018-04-26 17:58:56.480445", "msg": "non-zero return code", "rc": 1, "start": "2018-04-26 17:58:56.443078", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} It can write on the NFS storage but always fail. I have searching around but did not found any solution. Any idea? Log is attached. Versions: CentOS Linux release 7.4.1708 (Core) ovirt-hosted-engine-ha-2.2.10-1.el7.centos.noarch ovirt-release42-4.2.2-3.el7.centos.noarch ovirt-hosted-engine-setup-2.2.16-1.el7.centos.noarch python-ovirt-engine-sdk4-4.2.4-2.el7.centos.x86_64 ovirt-host-deploy-1.7.3-1.el7.centos.noarch ovirt-vmconsole-1.0.4-1.el7.noarch ovirt-imageio-daemon-1.2.2-0.el7.centos.noarch ovirt-provider-ovn-driver-1.2.9-1.el7.centos.noarch ovirt-host-4.2.2-2.el7.centos.x86_64 ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch ovirt-setup-lib-1.1.4-1.el7.centos.noarch ovirt-host-dependencies-4.2.2-2.el7.centos.x86_64 ovirt-engine-appliance-4.2-20180329.1.el7.centos.noarch cockpit-ovirt-dashboard-0.11.20-1.el7.centos.noarch ovirt-imageio-common-1.2.2-0.el7.centos.noarch ovirt-vmconsole-host-1.0.4-1.el7.noarch Thanks -- Jose Ferradeira http://www.logicworks.pt _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users BQ_END -------------- next part -------------- An HTML attachment was scrubbed... URL: From msivak at redhat.com Wed May 2 08:09:33 2018 From: msivak at redhat.com (Martin Sivak) Date: Wed, 2 May 2018 10:09:33 +0200 Subject: [ovirt-users] unable to start engine In-Reply-To: References: Message-ID: Hi, you are probably running 4.2 in global maintenance mode right? We do not download the vm.conf unless we need it and since you just rebooted the machine it might be missing indeed. It should recover properly if you let the agent do its job and start the engine by itself. It will download the vm.conf in the process. Best regards Martin Sivak On Wed, May 2, 2018 at 2:52 AM, Justin Zygmont wrote: > After rebooting the node hosting the engine, I get this: > > > > # hosted-engine --connect-storage > > # hosted-engine --vm-start > > The hosted engine configuration has not been retrieved from shared storage. > Please ensure that ovirt-ha-agent is running and the storage server is > reachable. > > > > ovirt-ha-agent is running and the NFS server is reachable, it used to work. 
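If global maintenance is indeed what is set here, the recovery Martin describes can be watched from the host itself. A short sketch, using only the standard hosted-engine paths (nothing specific to this deployment):

    # confirm the maintenance state and the overall HA view
    hosted-engine --vm-status

    # clear global maintenance and let the agent fetch vm.conf and start the engine VM on its own
    hosted-engine --set-maintenance --mode=none

    # watch it happen
    journalctl -u ovirt-ha-agent -f
    tail -f /var/log/ovirt-hosted-engine-ha/agent.log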
> I don?t see which log to check or where to look > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From callum at well.ox.ac.uk Wed May 2 08:20:20 2018 From: callum at well.ox.ac.uk (Callum Smith) Date: Wed, 2 May 2018 08:20:20 +0000 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: References: Message-ID: State is maintenance for the ISOs storage. I've extracted what is hopefully the relevant bits of the log. VDSM.log (SPM) 2018-05-02 09:16:03,455+0100 INFO (ioprocess communication (179084)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,456+0100 INFO (ioprocess communication (179091)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [vdsm.api] FINISH activateStorageDomain error=Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' from=::ffff:192.168.64.254,58968, flow_id=93433989-8e26-48a9-bd3a-2ab95f296c08, task_id=7f21f911-348f-45a3-b79c-e3cb11642035 (api:50) 2018-05-02 09:16:03,461+0100 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in activateStorageDomain File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1256, in activateStorageDomain pool.activateSD(sdUUID) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1130, in activateSD self.validateAttachedDomain(dom) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 557, in validateAttachedDomain raise se.StorageDomainNotInPool(self.spUUID, dom.sdUUID) StorageDomainNotInPool: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') aborting: Task is aborted: "Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304'" - code 353 (task:1181) 2018-05-02 09:16:03,462+0100 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH activateStorageDomain error=Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' (dispatcher:82) engine.log 2018-05-02 09:16:02,326+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (default task-20) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,376+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Running command: ActivateStorageDomainCommand internal: false. 
Entities affected : ID: f5914df0-f46c-4cc0-b666-c929aa0225ae Type: StorageAction group MANIPULATE_STORAGE_DOMA IN with role type ADMIN 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock freed to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] ActivateStorage Domain. Before Connect all hosts to pool. Time: Wed May 02 09:16:02 BST 2018 2018-05-02 09:16:02,407+01 INFO [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] Running command: ConnectStorageToVdsCommand internal: true. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 2018-05-02 09:16:02,421+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] START, ConnectStorageServerVDSCommand(HostName = virtA003, StorageServerConnectionManagementVDSParameters:{hostId='fe2861fc-2b47-4807-b054-470198eda473', storagePoolId='00000000-0000-0000-0000-000000 000000', storageType='NFS', connectionList='[StorageServerConnections:{id='da392861-aedc-4f1e-97f4-6919fb01f1e9', connection='backoffice01.cluster:/vm-iso', iqn='null', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 23ce648f 2018-05-02 09:16:02,446+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] FINISH, ConnectStorageServerVDSCommand, return: {da392861-aedc-4f1e-97f4-6919fb01f1e9=0}, log id: 23ce648f 2018-05-02 09:16:02,450+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', stor ageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}), log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command ActivateStorageDomainVDS failed: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a5 4bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', st orageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'})' execution failed: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] FINISH, ActivateStorageDomainVDSCommand, log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailove rException: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' (Failed with error StorageDomainNotInPool and code 353) 2018-05-02 09:16:02,636+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command [id=22b0f3c1-9a09-4e26-8096-d83465c8f4ee]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatus Snapshot:{id='StoragePoolIsoMapId:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', storageId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}', status='Maintenance'}. 2018-05-02 09:16:02,660+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage Domain VMISOs (Data Center Default) by admin at internal-authz Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 08:44, Fred Rolland > wrote: Hi, Can you provide logs from engine and Vdsm(SPM)? What is the state now? Thanks, Fred On Tue, May 1, 2018 at 4:11 PM, Callum Smith > wrote: Dear All, It appears that clicking "detach" on the ISO storage domain is a really bad idea. This has gotten half way through the procedure and now can't be recovered from. Is there any advice for re-attaching the ISO storage domain manually? An NFS mount didn't add it back to the "pool" unfortunately. On a separate note, is it possible to migrate this storage to a new location? And if so how. Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.lauriou at irisa.fr Wed May 2 08:29:50 2018 From: arnaud.lauriou at irisa.fr (Arnaud Lauriou) Date: Wed, 2 May 2018 10:29:50 +0200 Subject: [ovirt-users] Can't switch ovirt host to maintenance mode : image transfer in progress Message-ID: <9e7b35a7-5647-f86b-3e26-0f2fac40d556@irisa.fr> Hi, While upgrading host from ovirt 4.1.9 to ovirt 4.2.2, got one ovirt host which I can't move to maintenance mode for the following reason : 2018-05-02 10:16:18,789+02 WARN [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default task-27) [193c3d45-5b25-4ccf-9d2c-5b792e99b9fe] Validation of action 'MaintenanceNumberOfVdss' failed for user admin at internal-authz. 
Reasons: VAR__TYPE__HOST,VAR__ACTION__MAINTENANCE,VDS_CANNOT_MAINTENANCE_HOST_WITH_RUNNING_IMAGE_TRANSFERS,$host xxxxx,$disks a1476ae5-990d-45a7-90bd-c2553f8d08d3, b2616eef-bd13-4d9b-a513-52445ebaedb6, 13152865-2753-407a-a0e1-425e09889d92,$disks_COUNTER 3 I don't know what are those 3 images transfers in progress, there is no VM running on this host. I try to list old task running on it with the taskcleaner.sh script on the engine : found nothing. How can I delete or remove those entries ? Regards, Arnaud Lauriou From frolland at redhat.com Wed May 2 09:43:13 2018 From: frolland at redhat.com (Fred Rolland) Date: Wed, 2 May 2018 12:43:13 +0300 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: References: Message-ID: Which version are you using? Can you provide the whole log? For some reason, it looks like the Vdsm thinks that the Storage Domain is not part of the pool. On Wed, May 2, 2018 at 11:20 AM, Callum Smith wrote: > State is maintenance for the ISOs storage. I've extracted what is > hopefully the relevant bits of the log. > > VDSM.log (SPM) > > 2018-05-02 09:16:03,455+0100 INFO (ioprocess communication (179084)) > [IOProcess] Starting ioprocess (__init__:447) > 2018-05-02 09:16:03,456+0100 INFO (ioprocess communication (179091)) > [IOProcess] Starting ioprocess (__init__:447) > 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [vdsm.api] FINISH > activateStorageDomain error=Storage domain not in pool: > u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, > pool=5a54bf81-0228-02bc-0358-000000000304' from=::ffff:192.168.64.254,58968, > flow_id=93433989-8e26-48a9-bd3a-2ab95f296c08, task_id=7f21f911-348f-45a3-b79c-e3cb11642035 > (api:50) > 2018-05-02 09:16:03,461+0100 ERROR (jsonrpc/0) [storage.TaskManager.Task] > (Task='7f21f911-348f-45a3-b79c-e3cb11642035') Unexpected error (task:875) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, > in _run > return fn(*args, **kargs) > File "", line 2, in activateStorageDomain > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in > method > ret = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1256, > in activateStorageDomain > pool.activateSD(sdUUID) > File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line > 79, in wrapper > return method(self, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1130, > in activateSD > self.validateAttachedDomain(dom) > File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line > 79, in wrapper > return method(self, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 557, > in validateAttachedDomain > raise se.StorageDomainNotInPool(self.spUUID, dom.sdUUID) > StorageDomainNotInPool: Storage domain not in pool: > u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, > pool=5a54bf81-0228-02bc-0358-000000000304' > 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [storage.TaskManager.Task] > (Task='7f21f911-348f-45a3-b79c-e3cb11642035') aborting: Task is aborted: > "Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, > pool=5a54bf81-0228-02bc-0358-000000000304'" - code 353 (task:1181) > 2018-05-02 09:16:03,462+0100 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH > activateStorageDomain error=Storage domain not in pool: > u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, > pool=5a54bf81-0228-02bc-0358-000000000304' (dispatcher:82) > > engine.log > > 2018-05-02 09:16:02,326+01 
INFO [org.ovirt.engine.core.bll. > storage.domain.ActivateStorageDomainCommand] (default task-20) > [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock Acquired to object > 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', > sharedLocks=''}' > 2018-05-02 09:16:02,376+01 INFO [org.ovirt.engine.core.bll. > storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) > [93433989-8e26-48a9-bd3a-2ab95f296c08] Running command: > ActivateStorageDomainCommand internal: false. Entities affected : ID: > f5914df0-f46c-4cc0-b666-c929aa0225ae Type: StorageAction group > MANIPULATE_STORAGE_DOMA > IN with role type ADMIN > 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll. > storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) > [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock freed to object > 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', > sharedLocks=''}' > 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll. > storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) > [93433989-8e26-48a9-bd3a-2ab95f296c08] ActivateStorage Domain. Before > Connect all hosts to pool. Time: Wed May 02 09:16:02 BST 2018 > 2018-05-02 09:16:02,407+01 INFO [org.ovirt.engine.core.bll. > storage.connection.ConnectStorageToVdsCommand] (EE-ManagedThreadFactory-engine-Thread-33456) > [40a82b47] Running command: ConnectStorageToVdsCommand internal: true. > Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: > SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN > 2018-05-02 09:16:02,421+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] > (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] START, > ConnectStorageServerVDSCommand(HostName = virtA003, > StorageServerConnectionManagementVDSParameters:{hostId=' > fe2861fc-2b47-4807-b054-470198eda473', storagePoolId='00000000-0000- > 0000-0000-000000 > 000000', storageType='NFS', connectionList='[ > StorageServerConnections:{id='da392861-aedc-4f1e-97f4-6919fb01f1e9', > connection='backoffice01.cluster:/vm-iso', iqn='null', vfsType='null', > mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', > iface='null', netIfaceName='null'}]'}), log id: 23ce648f > 2018-05-02 09:16:02,446+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] > (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] FINISH, > ConnectStorageServerVDSCommand, return: {da392861-aedc-4f1e-97f4-6919fb01f1e9=0}, > log id: 23ce648f > 2018-05-02 09:16:02,450+01 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] > (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] > START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSComman > dParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', > ignoreFailoverLimit='false', stor > ageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}), log id: 5c864594 > 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) > [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: > IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command ActivateStorageDomainVDS > failed: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, > pool=5a5 > 4bf81-0228-02bc-0358-000000000304' > 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core. 
> vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] > (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] > Command 'ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSComman > dParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', > ignoreFailoverLimit='false', st > orageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'})' execution failed: > IRSGenericException: IRSErrorException: Storage domain not in pool: > u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, > pool=5a54bf81-0228-02bc-0358-000000000304' > 2018-05-02 09:16:02,635+01 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] > (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] > FINISH, ActivateStorageDomainVDSCommand, log id: 5c864594 > 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.bll. > storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) > [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'org.ovirt.engine.core.bll. > storage.domain.ActivateStorageDomainCommand' failed: EngineException: > org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailove > rException: IRSGenericException: IRSErrorException: Storage domain not in > pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, > pool=5a54bf81-0228-02bc-0358-000000000304' (Failed with error > StorageDomainNotInPool and code 353) > 2018-05-02 09:16:02,636+01 INFO [org.ovirt.engine.core.bll. > storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) > [93433989-8e26-48a9-bd3a-2ab95f296c08] Command > [id=22b0f3c1-9a09-4e26-8096-d83465c8f4ee]: Compensating > CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; > snapshot: EntityStatus > Snapshot:{id='StoragePoolIsoMapId:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', > storageId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}', status='Maintenance'}. > 2018-05-02 09:16:02,660+01 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) > [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: > USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage > Domain VMISOs (Data Center Default) by admin at internal-authz > > Regards, > Callum > > -- > > Callum Smith > Research Computing Core > Wellcome Trust Centre for Human Genetics > University of Oxford > e. callum at well.ox.ac.uk > > On 2 May 2018, at 08:44, Fred Rolland wrote: > > Hi, > > Can you provide logs from engine and Vdsm(SPM)? > What is the state now? > > Thanks, > Fred > > On Tue, May 1, 2018 at 4:11 PM, Callum Smith wrote: > >> Dear All, >> >> It appears that clicking "detach" on the ISO storage domain is a really >> bad idea. This has gotten half way through the procedure and now can't be >> recovered from. Is there any advice for re-attaching the ISO storage domain >> manually? An NFS mount didn't add it back to the "pool" unfortunately. >> >> On a separate note, is it possible to migrate this storage to a new >> location? And if so how. >> >> Regards, >> Callum >> >> -- >> >> Callum Smith >> Research Computing Core >> Wellcome Trust Centre for Human Genetics >> University of Oxford >> e. callum at well.ox.ac.uk >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From frolland at redhat.com Wed May 2 09:49:07 2018 From: frolland at redhat.com (Fred Rolland) Date: Wed, 2 May 2018 12:49:07 +0300 Subject: [ovirt-users] Can't switch ovirt host to maintenance mode : image transfer in progress In-Reply-To: <9e7b35a7-5647-f86b-3e26-0f2fac40d556@irisa.fr> References: <9e7b35a7-5647-f86b-3e26-0f2fac40d556@irisa.fr> Message-ID: Hi, Maybe you tried to upload/download images, and these are still running. Go to the disks tab and see if you have Upload/Download operations in progress cancel them. You have the option to cancel in the Download/Upload buttons. Regards, Fred On Wed, May 2, 2018 at 11:29 AM, Arnaud Lauriou wrote: > Hi, > > While upgrading host from ovirt 4.1.9 to ovirt 4.2.2, got one ovirt host > which I can't move to maintenance mode for the following reason : > 2018-05-02 10:16:18,789+02 WARN [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] > (default task-27) [193c3d45-5b25-4ccf-9d2c-5b792e99b9fe] Validation of > action 'MaintenanceNumberOfVdss' failed for user admin at internal-authz. > Reasons: VAR__TYPE__HOST,VAR__ACTION__MAINTENANCE,VDS_CANNOT_MAINTENA > NCE_HOST_WITH_RUNNING_IMAGE_TRANSFERS,$host xxxxx,$disks > a1476ae5-990d-45a7-90bd-c2553f8d08d3, > b2616eef-bd13-4d9b-a513-52445ebaedb6, > 13152865-2753-407a-a0e1-425e09889d92,$disks_COUNTER 3 > > I don't know what are those 3 images transfers in progress, there is no VM > running on this host. > I try to list old task running on it with the taskcleaner.sh script on the > engine : found nothing. > > How can I delete or remove those entries ? > > Regards, > > Arnaud Lauriou > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From callum at well.ox.ac.uk Wed May 2 09:53:15 2018 From: callum at well.ox.ac.uk (Callum Smith) Date: Wed, 2 May 2018 09:53:15 +0000 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: References: Message-ID: <288C5B8E-D9BC-48E5-A130-EA2F9FA9E942@well.ox.ac.uk> This is on 4.2.0.2-1, I've linked the main logs to dropbox simply because they're big, full of noise right now. https://www.dropbox.com/s/f8q3m5amro2a1b2/engine.log?dl=0 https://www.dropbox.com/s/uods85jk65halo3/vdsm.log?dl=0 Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 10:43, Fred Rolland > wrote: Which version are you using? Can you provide the whole log? For some reason, it looks like the Vdsm thinks that the Storage Domain is not part of the pool. On Wed, May 2, 2018 at 11:20 AM, Callum Smith > wrote: State is maintenance for the ISOs storage. I've extracted what is hopefully the relevant bits of the log. 
VDSM.log (SPM) 2018-05-02 09:16:03,455+0100 INFO (ioprocess communication (179084)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,456+0100 INFO (ioprocess communication (179091)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [vdsm.api] FINISH activateStorageDomain error=Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' from=::ffff:192.168.64.254,58968, flow_id=93433989-8e26-48a9-bd3a-2ab95f296c08, task_id=7f21f911-348f-45a3-b79c-e3cb11642035 (api:50) 2018-05-02 09:16:03,461+0100 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in activateStorageDomain File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1256, in activateStorageDomain pool.activateSD(sdUUID) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1130, in activateSD self.validateAttachedDomain(dom) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 557, in validateAttachedDomain raise se.StorageDomainNotInPool(self.spUUID, dom.sdUUID) StorageDomainNotInPool: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') aborting: Task is aborted: "Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304'" - code 353 (task:1181) 2018-05-02 09:16:03,462+0100 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH activateStorageDomain error=Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' (dispatcher:82) engine.log 2018-05-02 09:16:02,326+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (default task-20) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,376+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Running command: ActivateStorageDomainCommand internal: false. Entities affected : ID: f5914df0-f46c-4cc0-b666-c929aa0225ae Type: StorageAction group MANIPULATE_STORAGE_DOMA IN with role type ADMIN 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock freed to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] ActivateStorage Domain. 
Before Connect all hosts to pool. Time: Wed May 02 09:16:02 BST 2018 2018-05-02 09:16:02,407+01 INFO [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] Running command: ConnectStorageToVdsCommand internal: true. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 2018-05-02 09:16:02,421+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] START, ConnectStorageServerVDSCommand(HostName = virtA003, StorageServerConnectionManagementVDSParameters:{hostId='fe2861fc-2b47-4807-b054-470198eda473', storagePoolId='00000000-0000-0000-0000-000000 000000', storageType='NFS', connectionList='[StorageServerConnections:{id='da392861-aedc-4f1e-97f4-6919fb01f1e9', connection='backoffice01.cluster:/vm-iso', iqn='null', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 23ce648f 2018-05-02 09:16:02,446+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] FINISH, ConnectStorageServerVDSCommand, return: {da392861-aedc-4f1e-97f4-6919fb01f1e9=0}, log id: 23ce648f 2018-05-02 09:16:02,450+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', stor ageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}), log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command ActivateStorageDomainVDS failed: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a5 4bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', st orageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'})' execution failed: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] FINISH, ActivateStorageDomainVDSCommand, log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailove rException: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, 
pool=5a54bf81-0228-02bc-0358-000000000304' (Failed with error StorageDomainNotInPool and code 353) 2018-05-02 09:16:02,636+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command [id=22b0f3c1-9a09-4e26-8096-d83465c8f4ee]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatus Snapshot:{id='StoragePoolIsoMapId:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', storageId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}', status='Maintenance'}. 2018-05-02 09:16:02,660+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage Domain VMISOs (Data Center Default) by admin at internal-authz Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 08:44, Fred Rolland > wrote: Hi, Can you provide logs from engine and Vdsm(SPM)? What is the state now? Thanks, Fred On Tue, May 1, 2018 at 4:11 PM, Callum Smith > wrote: Dear All, It appears that clicking "detach" on the ISO storage domain is a really bad idea. This has gotten half way through the procedure and now can't be recovered from. Is there any advice for re-attaching the ISO storage domain manually? An NFS mount didn't add it back to the "pool" unfortunately. On a separate note, is it possible to migrate this storage to a new location? And if so how. Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From i.am.stack at gmail.com Wed May 2 12:03:53 2018 From: i.am.stack at gmail.com (~Stack~) Date: Wed, 2 May 2018 07:03:53 -0500 Subject: [ovirt-users] Is it possible to recover from a failed Engine host? Message-ID: <34dcadc9-6e2c-657a-d065-9af80582860d@gmail.com> Greetings, I have a dev environment where it seems the hard drive on our Engine host kicked the bucket (Yeah, I know. Smartmon. I watch it closely on the systems I care about - this was a learning environment for me so I didn't). The Hypervisors are fine and the VM's running on the Hypervisors are fine...But I can't manage any of the Hypervisors. To make things a bit more tricky, the SQL and the backups were on the drive that died. I really don't have anything from that host. It's dev. I can rebuild. But it is also a learning environment for me so might as well use this to learn. Is it possible for me to build a new Engine host and attach it to an existing hypervisor environment? Better yet, would this be something I could do as a hosted-engine-deploy? (something I haven't experimented with yet.) Again, this is a play ground so if it goes horrifically wrong...oh well. But I would really like to try to recover it for the learning experience. I've been poking around in the documentation but I haven't seen anything that seems to address this issue directly. Thoughts? Thanks! ~Stack~ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From awels at redhat.com Wed May 2 12:27:44 2018 From: awels at redhat.com (Alexander Wels) Date: Wed, 02 May 2018 08:27:44 -0400 Subject: [ovirt-users] Is it possible to recover from a failed Engine host? In-Reply-To: <34dcadc9-6e2c-657a-d065-9af80582860d@gmail.com> References: <34dcadc9-6e2c-657a-d065-9af80582860d@gmail.com> Message-ID: <1970074.h4FrLtsj84@awels> On Wednesday, May 2, 2018 8:03:53 AM EDT ~Stack~ wrote: > Greetings, > > I have a dev environment where it seems the hard drive on our Engine > host kicked the bucket (Yeah, I know. Smartmon. I watch it closely on > the systems I care about - this was a learning environment for me so I > didn't). > > The Hypervisors are fine and the VM's running on the Hypervisors are > fine...But I can't manage any of the Hypervisors. To make things a bit > more tricky, the SQL and the backups were on the drive that died. I > really don't have anything from that host. It's dev. I can rebuild. But > it is also a learning environment for me so might as well use this to learn. > > Is it possible for me to build a new Engine host and attach it to an > existing hypervisor environment? Better yet, would this be something I > could do as a hosted-engine-deploy? (something I haven't experimented > with yet.) > > Again, this is a play ground so if it goes horrifically wrong...oh well. > But I would really like to try to recover it for the learning > experience. I've been poking around in the documentation but I haven't > seen anything that seems to address this issue directly. > > Thoughts? > > Thanks! > ~Stack~ As long as the storage domain is in tact you should be able to recover everything. And it does sound like this is the case as the VMs are still running. Basically you just install a new engine somewhere and then do the following: - Create new Data Center - Create new Cluster - You will need a host to add to your cluster. Add this host. - Create a small temporary storage domain, this will allow you to bring up the data center which in turn will allow you to IMPORT the existing storage domain. - Once the DC is up, you can 'import' the existing storage domain, it will warn you that the storage domain is still attached to another DC, but since that engine is gone, you can ignore that. - Once the new DC is imported you can stop/detach/remove the small temporary storage domain, which will make the imported storage domain, the master domain. Once all that is done, you can simply go to the storage domain, and 'import' whatever VM/template you have stored on the storage domain, and it will show up in the VM/template list. Then you add all your hosts and you should have a running environment again. From i.am.stack at gmail.com Wed May 2 12:37:38 2018 From: i.am.stack at gmail.com (~Stack~) Date: Wed, 2 May 2018 07:37:38 -0500 Subject: [ovirt-users] Is it possible to recover from a failed Engine host? In-Reply-To: <1970074.h4FrLtsj84@awels> References: <34dcadc9-6e2c-657a-d065-9af80582860d@gmail.com> <1970074.h4FrLtsj84@awels> Message-ID: On 05/02/2018 07:27 AM, Alexander Wels wrote: > On Wednesday, May 2, 2018 8:03:53 AM EDT ~Stack~ wrote: >> Greetings, >> >> I have a dev environment where it seems the hard drive on our Engine >> host kicked the bucket (Yeah, I know. Smartmon. I watch it closely on >> the systems I care about - this was a learning environment for me so I >> didn't). 
>> >> The Hypervisors are fine and the VM's running on the Hypervisors are >> fine...But I can't manage any of the Hypervisors. To make things a bit >> more tricky, the SQL and the backups were on the drive that died. I >> really don't have anything from that host. It's dev. I can rebuild. But >> it is also a learning environment for me so might as well use this to learn. >> >> Is it possible for me to build a new Engine host and attach it to an >> existing hypervisor environment? Better yet, would this be something I >> could do as a hosted-engine-deploy? (something I haven't experimented >> with yet.) >> >> Again, this is a play ground so if it goes horrifically wrong...oh well. >> But I would really like to try to recover it for the learning >> experience. I've been poking around in the documentation but I haven't >> seen anything that seems to address this issue directly. >> >> Thoughts? >> >> Thanks! >> ~Stack~ > > As long as the storage domain is in tact you should be able to recover > everything. And it does sound like this is the case as the VMs are still > running. Basically you just install a new engine somewhere and then do the > following: > > - Create new Data Center > - Create new Cluster > - You will need a host to add to your cluster. Add this host. > - Create a small temporary storage domain, this will allow you to bring up the > data center which in turn will allow you to IMPORT the existing storage > domain. > - Once the DC is up, you can 'import' the existing storage domain, it will > warn you that the storage domain is still attached to another DC, but since > that engine is gone, you can ignore that. > - Once the new DC is imported you can stop/detach/remove the small temporary > storage domain, which will make the imported storage domain, the master > domain. > > Once all that is done, you can simply go to the storage domain, and 'import' > whatever VM/template you have stored on the storage domain, and it will show > up in the VM/template list. Then you add all your hosts and you should have a > running environment again. > Thank you! I will give it a try and see what happens. ~Stack~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From P.Staniforth at leedsbeckett.ac.uk Wed May 2 09:31:34 2018 From: P.Staniforth at leedsbeckett.ac.uk (Staniforth, Paul) Date: Wed, 2 May 2018 09:31:34 +0000 Subject: [ovirt-users] User portal permissions In-Reply-To: References: Message-ID: <1525253494796.24588@leedsbeckett.ac.uk> Hello Kristian, I haven't upgraded to 4.2 yet but in 4.1 I create a Directory services group and add the user to it Then I give the user roles VmCreator DiskProfileUser VnicProfileUser These can be added to a DC, storage domain, cluster, etc. The VnicProfileUser means that you can restrict networks and impose QOS, Network Filters,etc. We then add UserTemplateBasedVm permission to give them access to specific templates. For users we allow to create templates we add TemplateCreator permission to their group. These are all user permissions so they can't login to the admin portal. Regards, Paul S. 
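For anyone who prefers to script these assignments, the same user-level roles can be granted with the Python SDK (ovirtsdk4) instead of the admin portal. The sketch below is only illustrative: it assumes the directory group has already been added to the engine, and the group name 'vm-creators', the data center name 'Default' and the credentials are placeholders.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder engine URL and credentials - adjust for your setup.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='/etc/pki/ovirt-engine/ca.pem',
    )
    system = connection.system_service()

    # The directory-services group is assumed to be imported into the engine already.
    group = next(g for g in system.groups_service().list() if g.name == 'vm-creators')

    # Resolve role names to ids once.
    role_ids = {r.name: r.id for r in system.roles_service().list()}

    # Grant the user-level roles on the data center (a storage domain or
    # cluster would work the same way, as described above).
    dc = system.data_centers_service().list(search='name=Default')[0]
    permissions = system.data_centers_service().data_center_service(dc.id).permissions_service()
    for role_name in ('VmCreator', 'DiskProfileUser', 'VnicProfileUser'):
        permissions.add(types.Permission(
            role=types.Role(id=role_ids[role_name]),
            group=types.Group(id=group.id),
        ))

    connection.close()

The UserTemplateBasedVm and TemplateCreator permissions mentioned above can be added in the same way, against the relevant template or data center, without granting any admin role.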
________________________________ From: users-bounces at ovirt.org on behalf of Kristian Petersen Sent: 01 May 2018 16:36 To: users Subject: [ovirt-users] User portal permissions I have a user that I want to be able to create VMs, create and attach disks to the new VM, add a network interface to the VM, and get the OS installed so it is ready to use. I am completely confused on which role (or roles) this user would need to do that without making them an admin. Can someone give me some guidance here? -- Kristian Petersen System Administrator BYU Dept. of Chemistry and Biochemistry To view the terms under which this email is distributed, please go to:- http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From frolland at redhat.com Wed May 2 13:46:55 2018 From: frolland at redhat.com (Fred Rolland) Date: Wed, 2 May 2018 16:46:55 +0300 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: <288C5B8E-D9BC-48E5-A130-EA2F9FA9E942@well.ox.ac.uk> References: <288C5B8E-D9BC-48E5-A130-EA2F9FA9E942@well.ox.ac.uk> Message-ID: Can you share the REST API data of the Storage domain and Data Center? Here an example of the URLs, you will need to replace with correct ids. http://MY-SERVER/ovirt-engine/api/v4/storagedomains/13461356-f6f7-4a58-9897-2fac61ff40af http://MY-SERVER/ovirt-engine/api/v4/datacenters/5a5df553-022d-036d-01e8-000000000071/storagedomains On Wed, May 2, 2018 at 12:53 PM, Callum Smith wrote: > This is on 4.2.0.2-1, I've linked the main logs to dropbox simply because > they're big, full of noise right now. > https://www.dropbox.com/s/f8q3m5amro2a1b2/engine.log?dl=0 > https://www.dropbox.com/s/uods85jk65halo3/vdsm.log?dl=0 > > Regards, > Callum > > -- > > Callum Smith > Research Computing Core > Wellcome Trust Centre for Human Genetics > University of Oxford > e. callum at well.ox.ac.uk > > On 2 May 2018, at 10:43, Fred Rolland wrote: > > Which version are you using? > Can you provide the whole log? > > For some reason, it looks like the Vdsm thinks that the Storage Domain is > not part of the pool. > > On Wed, May 2, 2018 at 11:20 AM, Callum Smith > wrote: > >> State is maintenance for the ISOs storage. I've extracted what is >> hopefully the relevant bits of the log. 
>> >> VDSM.log (SPM) >> >> 2018-05-02 09:16:03,455+0100 INFO (ioprocess communication (179084)) >> [IOProcess] Starting ioprocess (__init__:447) >> 2018-05-02 09:16:03,456+0100 INFO (ioprocess communication (179091)) >> [IOProcess] Starting ioprocess (__init__:447) >> 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [vdsm.api] FINISH >> activateStorageDomain error=Storage domain not in pool: >> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >> pool=5a54bf81-0228-02bc-0358-000000000304' from=::ffff:192.168.64.254,58968, >> flow_id=93433989-8e26-48a9-bd3a-2ab95f296c08, >> task_id=7f21f911-348f-45a3-b79c-e3cb11642035 (api:50) >> 2018-05-02 09:16:03,461+0100 ERROR (jsonrpc/0) [storage.TaskManager.Task] >> (Task='7f21f911-348f-45a3-b79c-e3cb11642035') Unexpected error (task:875) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >> 882, in _run >> return fn(*args, **kargs) >> File "", line 2, in activateStorageDomain >> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, >> in method >> ret = func(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line >> 1256, in activateStorageDomain >> pool.activateSD(sdUUID) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >> line 79, in wrapper >> return method(self, *args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1130, >> in activateSD >> self.validateAttachedDomain(dom) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >> line 79, in wrapper >> return method(self, *args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 557, >> in validateAttachedDomain >> raise se.StorageDomainNotInPool(self.spUUID, dom.sdUUID) >> StorageDomainNotInPool: Storage domain not in pool: >> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >> pool=5a54bf81-0228-02bc-0358-000000000304' >> 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [storage.TaskManager.Task] >> (Task='7f21f911-348f-45a3-b79c-e3cb11642035') aborting: Task is aborted: >> "Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >> pool=5a54bf81-0228-02bc-0358-000000000304'" - code 353 (task:1181) >> 2018-05-02 09:16:03,462+0100 ERROR (jsonrpc/0) [storage.Dispatcher] >> FINISH activateStorageDomain error=Storage domain not in pool: >> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >> pool=5a54bf81-0228-02bc-0358-000000000304' (dispatcher:82) >> >> engine.log >> >> 2018-05-02 09:16:02,326+01 INFO [org.ovirt.engine.core.bll.st >> orage.domain.ActivateStorageDomainCommand] (default task-20) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock Acquired to object >> 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', >> sharedLocks=''}' >> 2018-05-02 09:16:02,376+01 INFO [org.ovirt.engine.core.bll.st >> orage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] Running command: >> ActivateStorageDomainCommand internal: false. 
Entities affected : ID: >> f5914df0-f46c-4cc0-b666-c929aa0225ae Type: StorageAction group >> MANIPULATE_STORAGE_DOMA >> IN with role type ADMIN >> 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.st >> orage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock freed to object >> 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', >> sharedLocks=''}' >> 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.st >> orage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] ActivateStorage Domain. Before >> Connect all hosts to pool. Time: Wed May 02 09:16:02 BST 2018 >> 2018-05-02 09:16:02,407+01 INFO [org.ovirt.engine.core.bll.st >> orage.connection.ConnectStorageToVdsCommand] >> (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] Running >> command: ConnectStorageToVdsCommand internal: true. Entities affected : >> ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group >> CREATE_STORAGE_DOMAIN with role type ADMIN >> 2018-05-02 09:16:02,421+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.ConnectStorageServerVDSCommand] >> (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] START, >> ConnectStorageServerVDSCommand(HostName = virtA003, >> StorageServerConnectionManagementVDSParameters:{hostId='fe28 >> 61fc-2b47-4807-b054-470198eda473', storagePoolId='00000000-0000-0 >> 000-0000-000000 >> 000000', storageType='NFS', connectionList='[StorageServer >> Connections:{id='da392861-aedc-4f1e-97f4-6919fb01f1e9', >> connection='backoffice01.cluster:/vm-iso', iqn='null', vfsType='null', >> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', >> iface='null', netIfaceName='null'}]'}), log id: 23ce648f >> 2018-05-02 09:16:02,446+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.ConnectStorageServerVDSCommand] >> (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] FINISH, >> ConnectStorageServerVDSCommand, return: {da392861-aedc-4f1e-97f4-6919fb01f1e9=0}, >> log id: 23ce648f >> 2018-05-02 09:16:02,450+01 INFO [org.ovirt.engine.core.vdsbro >> ker.irsbroker.ActivateStorageDomainVDSCommand] >> (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] START, >> ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSComman >> dParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', >> ignoreFailoverLimit='false', stor >> ageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}), log id: 5c864594 >> 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: >> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command >> ActivateStorageDomainVDS failed: Storage domain not in pool: >> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a5 >> 4bf81-0228-02bc-0358-000000000304' >> 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.vdsbrok >> er.irsbroker.ActivateStorageDomainVDSCommand] >> (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] Command >> 'ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSComman >> dParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', >> ignoreFailoverLimit='false', st >> orageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'})' execution >> failed: IRSGenericException: IRSErrorException: Storage domain not in 
pool: >> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >> pool=5a54bf81-0228-02bc-0358-000000000304' >> 2018-05-02 09:16:02,635+01 INFO [org.ovirt.engine.core.vdsbro >> ker.irsbroker.ActivateStorageDomainVDSCommand] >> (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] FINISH, >> ActivateStorageDomainVDSCommand, log id: 5c864594 >> 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.bll.sto >> rage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] Command >> 'org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand' >> failed: EngineException: org.ovirt.engine.core.vdsbroke >> r.irsbroker.IrsOperationFailedNoFailove >> rException: IRSGenericException: IRSErrorException: Storage domain not in >> pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >> pool=5a54bf81-0228-02bc-0358-000000000304' (Failed with error >> StorageDomainNotInPool and code 353) >> 2018-05-02 09:16:02,636+01 INFO [org.ovirt.engine.core.bll.st >> orage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] Command >> [id=22b0f3c1-9a09-4e26-8096-d83465c8f4ee]: Compensating >> CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.b >> usinessentities.StoragePoolIsoMap; snapshot: EntityStatus >> Snapshot:{id='StoragePoolIsoMapId:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', >> storageId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}', >> status='Maintenance'}. >> 2018-05-02 09:16:02,660+01 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) >> [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: >> USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage >> Domain VMISOs (Data Center Default) by admin at internal-authz >> >> Regards, >> Callum >> >> -- >> >> Callum Smith >> Research Computing Core >> Wellcome Trust Centre for Human Genetics >> University of Oxford >> e. callum at well.ox.ac.uk >> >> On 2 May 2018, at 08:44, Fred Rolland wrote: >> >> Hi, >> >> Can you provide logs from engine and Vdsm(SPM)? >> What is the state now? >> >> Thanks, >> Fred >> >> On Tue, May 1, 2018 at 4:11 PM, Callum Smith >> wrote: >> >>> Dear All, >>> >>> It appears that clicking "detach" on the ISO storage domain is a really >>> bad idea. This has gotten half way through the procedure and now can't be >>> recovered from. Is there any advice for re-attaching the ISO storage domain >>> manually? An NFS mount didn't add it back to the "pool" unfortunately. >>> >>> On a separate note, is it possible to migrate this storage to a new >>> location? And if so how. >>> >>> Regards, >>> Callum >>> >>> -- >>> >>> Callum Smith >>> Research Computing Core >>> Wellcome Trust Centre for Human Genetics >>> University of Oxford >>> e. callum at well.ox.ac.uk >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From callum at well.ox.ac.uk Wed May 2 14:09:14 2018 From: callum at well.ox.ac.uk (Callum Smith) Date: Wed, 2 May 2018 14:09:14 +0000 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: References: <288C5B8E-D9BC-48E5-A130-EA2F9FA9E942@well.ox.ac.uk> Message-ID: Attached, thank you for looking into this

[The XML markup of the two REST responses was stripped when the HTML attachment was scrubbed; the recoverable fields are summarised below, with element names reconstructed from the standard v4 storage_domain schema and sizes in bytes. All unlabelled boolean flags were "false", and every domain reported the default thresholds critical_space_action_blocker=5 and warning_low_space_indicator=10.]

https://HOSTNAME/ovirt-engine/api/v4/storagedomains/f5914df0-f46c-4cc0-b666-c929aa0225ae

  VMISOs: backoffice01.cluster:/vm-iso (NFS), storage format v1, type iso, external status ok, master false, available 11770357874688, committed 0, used 38654705664

https://HOSTNAME/ovirt-engine/api/v4/datacenters/5a54bf81-0228-02bc-0358-000000000304/storagedomains

  tegile-virtman-backup: 192.168.64.248:/export/virtman/backup (NFS, version auto), storage format v1, type export, external status ok, master false, status maintenance, available 17519171600384, committed 0, used 8589934592
  VMStorage: backoffice01.cluster:/vm-storage2 (NFS), storage format v4, type data, external status ok, master true, status active, available 11770357874688, committed 118111600640, used 38654705664
  tegile-virtman: 192.168.64.248:/export/virtman/VirtualServerShare_1 (NFS, version auto), storage format v4, type data, external status ok, master false, status active, available 2190433320960, committed 226559524864, used 8589934592
  VMISOs: backoffice01.cluster:/vm-iso (NFS), storage format v1, type iso, external status ok, master false, status maintenance, available 11770357874688, committed 0, used 38654705664
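For reference, the attach-and-activate flow that the engine is attempting here can also be driven through the same v4 API shown in these URLs, e.g. with the Python SDK (ovirtsdk4). This is only a minimal sketch using the data-center and storage-domain IDs from the listing above and placeholder credentials; it shows the normal API path, not a workaround for the StorageDomainNotInPool error in this thread.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder credentials - adjust for your engine.
    connection = sdk.Connection(
        url='https://HOSTNAME/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='/etc/pki/ovirt-engine/ca.pem',
    )

    # Attached-storage-domain collection of the data center above.
    attached_sds = connection.system_service() \
        .data_centers_service() \
        .data_center_service('5a54bf81-0228-02bc-0358-000000000304') \
        .storage_domains_service()

    # Attach the ISO domain if it is not attached to the data center yet...
    attached_sds.add(types.StorageDomain(id='f5914df0-f46c-4cc0-b666-c929aa0225ae'))

    # ...and activate it once it is listed as attached (maintenance).
    attached_sds.storage_domain_service('f5914df0-f46c-4cc0-b666-c929aa0225ae').activate()

    connection.close()

In this case the activate step is exactly the call that fails: VDSM no longer finds the domain in its pool metadata even though the engine still lists it as attached, which is why the logs show StorageDomainNotInPool rather than a problem with the attach itself.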
[cid:921E5257-26F4-4E0D-83B0-68F996842D08 at well.ox.ac.uk] Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 14:46, Fred Rolland > wrote: Can you share the REST API data of the Storage domain and Data Center? Here an example of the URLs, you will need to replace with correct ids. http://MY-SERVER/ovirt-engine/api/v4/storagedomains/13461356-f6f7-4a58-9897-2fac61ff40af http://MY-SERVER/ovirt-engine/api/v4/datacenters/5a5df553-022d-036d-01e8-000000000071/storagedomains On Wed, May 2, 2018 at 12:53 PM, Callum Smith > wrote: This is on 4.2.0.2-1, I've linked the main logs to dropbox simply because they're big, full of noise right now. https://www.dropbox.com/s/f8q3m5amro2a1b2/engine.log?dl=0 https://www.dropbox.com/s/uods85jk65halo3/vdsm.log?dl=0 Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 10:43, Fred Rolland > wrote: Which version are you using? Can you provide the whole log? For some reason, it looks like the Vdsm thinks that the Storage Domain is not part of the pool. On Wed, May 2, 2018 at 11:20 AM, Callum Smith > wrote: State is maintenance for the ISOs storage. I've extracted what is hopefully the relevant bits of the log. VDSM.log (SPM) 2018-05-02 09:16:03,455+0100 INFO (ioprocess communication (179084)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,456+0100 INFO (ioprocess communication (179091)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [vdsm.api] FINISH activateStorageDomain error=Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' from=::ffff:192.168.64.254,58968, flow_id=93433989-8e26-48a9-bd3a-2ab95f296c08, task_id=7f21f911-348f-45a3-b79c-e3cb11642035 (api:50) 2018-05-02 09:16:03,461+0100 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in activateStorageDomain File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1256, in activateStorageDomain pool.activateSD(sdUUID) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1130, in activateSD self.validateAttachedDomain(dom) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 557, in validateAttachedDomain raise se.StorageDomainNotInPool(self.spUUID, dom.sdUUID) StorageDomainNotInPool: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') aborting: Task is aborted: "Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304'" - code 353 (task:1181) 2018-05-02 09:16:03,462+0100 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH 
activateStorageDomain error=Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' (dispatcher:82) engine.log 2018-05-02 09:16:02,326+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (default task-20) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,376+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Running command: ActivateStorageDomainCommand internal: false. Entities affected : ID: f5914df0-f46c-4cc0-b666-c929aa0225ae Type: StorageAction group MANIPULATE_STORAGE_DOMA IN with role type ADMIN 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock freed to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] ActivateStorage Domain. Before Connect all hosts to pool. Time: Wed May 02 09:16:02 BST 2018 2018-05-02 09:16:02,407+01 INFO [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] Running command: ConnectStorageToVdsCommand internal: true. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 2018-05-02 09:16:02,421+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] START, ConnectStorageServerVDSCommand(HostName = virtA003, StorageServerConnectionManagementVDSParameters:{hostId='fe2861fc-2b47-4807-b054-470198eda473', storagePoolId='00000000-0000-0000-0000-000000 000000', storageType='NFS', connectionList='[StorageServerConnections:{id='da392861-aedc-4f1e-97f4-6919fb01f1e9', connection='backoffice01.cluster:/vm-iso', iqn='null', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 23ce648f 2018-05-02 09:16:02,446+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] FINISH, ConnectStorageServerVDSCommand, return: {da392861-aedc-4f1e-97f4-6919fb01f1e9=0}, log id: 23ce648f 2018-05-02 09:16:02,450+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', stor ageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}), log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command ActivateStorageDomainVDS failed: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a5 
4bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', st orageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'})' execution failed: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] FINISH, ActivateStorageDomainVDSCommand, log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailove rException: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' (Failed with error StorageDomainNotInPool and code 353) 2018-05-02 09:16:02,636+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command [id=22b0f3c1-9a09-4e26-8096-d83465c8f4ee]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatus Snapshot:{id='StoragePoolIsoMapId:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', storageId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}', status='Maintenance'}. 2018-05-02 09:16:02,660+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage Domain VMISOs (Data Center Default) by admin at internal-authz Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 08:44, Fred Rolland > wrote: Hi, Can you provide logs from engine and Vdsm(SPM)? What is the state now? Thanks, Fred On Tue, May 1, 2018 at 4:11 PM, Callum Smith > wrote: Dear All, It appears that clicking "detach" on the ISO storage domain is a really bad idea. This has gotten half way through the procedure and now can't be recovered from. Is there any advice for re-attaching the ISO storage domain manually? An NFS mount didn't add it back to the "pool" unfortunately. On a separate note, is it possible to migrate this storage to a new location? And if so how. Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screen Shot 2018-05-02 at 14.54.42.png Type: image/png Size: 24091 bytes Desc: Screen Shot 2018-05-02 at 14.54.42.png URL: From i.am.stack at gmail.com Wed May 2 19:49:24 2018 From: i.am.stack at gmail.com (~Stack~) Date: Wed, 2 May 2018 14:49:24 -0500 Subject: [ovirt-users] Remote DB: How do you set server_version? Message-ID: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> Greetings, Exploring hosting my engine and ovirt_engine_history db's on my dedicated PostgreSQL server. This is a 9.5 install on a beefy box from the postgresql.org yum repos that I'm using for other SQL needs too. 9.5.12 to be exact. I set up the database just as the documentation says and I'm doing a fresh install of my engine-setup. During the install, right after I give it the details for the remote I get this error: [ ERROR ] Please set: server_version = 9.5.9 in postgresql.conf on 'None'. Its location is usually /var/lib/pgsql/data , or somewhere under /etc/postgresql* . Huh? Um. OK. $ grep ^server_version postgresql.conf server_version = 9.5.9 $ systemctl restart postgresql-9.5.service LOG: syntax error in file "/var/lib/pgsql/9.5/data/postgresql.conf" line 33, n...n ".9" FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" contains errors Well that didn't work. Let's try something else. $ grep ^server_version postgresql.conf server_version = 9.5.9 $ systemctl restart postgresql-9.5.service LOG: parameter "server_version" cannot be changed FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" contains errors Whelp. That didn't work either. I can't seem to find anything in the oVirt docs on setting this. How am I supposed to do this? Thanks! ~Stack~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jlawrence at squaretrade.com Wed May 2 20:26:45 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Wed, 2 May 2018 13:26:45 -0700 Subject: [ovirt-users] Remote DB: How do you set server_version? In-Reply-To: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> References: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> Message-ID: <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> I've been down this road. Postgres won't lie about its version for you. If you want to do this, you have to patch the Ovirt installer[1]. I stopped trying to use my PG cluster at some point - the relationship between the installer and the product combined with the overly restrictive requirements baked into the installer[2]) makes doing so an ongoing hassle. So I treat Ovirt's PG as an black box; disappointing, considering that we are a very heavy PG shop with a lot of expertise and automation I can't use with Ovirt. If nothing has changed (my notes are from a few versions ago), everything you need to correct is in /usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/constants.py Aside from the version, you'll also have to make the knobs for vacuuming match those of your current installation, and I think there was another configurable for something else I'm not remembering right now. Be aware that doing so is accepting an ongoing commitment to monkeying with the installer a lot. At one time I thought doing so was the right tradeoff, but it turns out I was wrong. -j [1] Or you could rebuild PG with a fake version. That option was unavailable here. [2] Not criticizing, just stating a technical fact. 
How folks apportion their QA resources is their business. > On May 2, 2018, at 12:49 PM, ~Stack~ wrote: > > Greetings, > > Exploring hosting my engine and ovirt_engine_history db's on my > > dedicated PostgreSQL server. > > > > This is a 9.5 install on a beefy box from the postgresql.org yum repos > > that I'm using for other SQL needs too. 9.5.12 to be exact. I set up the > > database just as the documentation says and I'm doing a fresh install of > > my engine-setup. > > > > During the install, right after I give it the details for the remote I > > get this error: > > [ ERROR ] Please set: > > server_version = 9.5.9 > > in postgresql.conf on 'None'. Its location is usually > > /var/lib/pgsql/data , or somewhere under /etc/postgresql* . > > > > Huh? > > > > Um. OK. > > $ grep ^server_version postgresql.conf > > server_version = 9.5.9 > > > > $ systemctl restart postgresql-9.5.service > > > > LOG: syntax error in file "/var/lib/pgsql/9.5/data/postgresql.conf" > > line 33, n...n ".9" > > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" > > contains errors > > > > > > Well that didn't work. Let's try something else. > > > > $ grep ^server_version postgresql.conf > > server_version = 9.5.9 > > > > $ systemctl restart postgresql-9.5.service > > LOG: parameter "server_version" cannot be changed > > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" > > contains errors > > > > Whelp. That didn't work either. I can't seem to find anything in the > > oVirt docs on setting this. > > > > How am I supposed to do this? > > > > Thanks! > > ~Stack~ > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users From marceloltmm at gmail.com Wed May 2 20:55:17 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Wed, 2 May 2018 17:55:17 -0300 Subject: [ovirt-users] problem to create snapshot Message-ID: Hello, I am getting an error when I try to take a snapshot. Error msg in the SPM log: 2018-05-02 17:46:11,235-0300 WARN (tasks/2) [storage.ResourceManager] Resource factory failed to create resource '01_img_6e5cce71-3438-4045-9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6'. Canceling request.
(resourceManager:543) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 539, in registerResource obj = namespaceObj.factory.createResource(name, lockType) File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 193, in createResource lockType) File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 122, in __getResourceCandidatesList imgUUID=resourceName) File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 213, in getChain if srcVol.isLeaf(): File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1430, in isLeaf return self._manifest.isLeaf() File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 138, in isLeaf return self.getVolType() == sc.type2name(sc.LEAF_VOL) File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 134, in getVolType self.voltype = self.getMetaParam(sc.VOLTYPE) File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 118, in getMetaParam meta = self.getMetadata() File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 112, in getMetadata md = VolumeMetadata.from_lines(lines) File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", line 103, in from_lines "Missing metadata key: %s: found: %s" % (e, md)) MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing metadata key: 'DOMAIN': found: {'NONE': '######################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################'}",) 2018-05-02 17:46:11,286-0300 WARN (tasks/2) [storage.ResourceManager.Request] (ResName='01_img_6e5cce71-3438-4045-9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6', ReqID='a3cd9388-977b-45b9-9aa0-e431aeff8750') Tried to cancel a processed request (resourceManager:187) 2018-05-02 17:46:11,286-0300 ERROR (tasks/2) [storage.TaskManager.Task] (Task='ba0766ca-08a1-4d65-a4e9-1e0171939037') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run return self.cmd(*self.argslist, **self.argsdict) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1938, in createVolume with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE): File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 1025, in acquireResource return _manager.acquireResource(namespace, name, lockType, timeout=timeout) File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 475, in acquireResource raise se.ResourceAcqusitionFailed() ResourceAcqusitionFailed: Could not acquire resource. Probably resource factory threw an exception.: () Anyone help? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jzygmont at proofpoint.com Wed May 2 21:03:09 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Wed, 2 May 2018 21:03:09 +0000 Subject: [ovirt-users] unable to start engine In-Reply-To: References: Message-ID: This link is good, thanks. From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Wednesday, May 2, 2018 12:54 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] unable to start engine I would also recommend reading: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/troubleshooting On Wed, May 2, 2018 at 10:26 AM, Yanir Quinn > wrote: Hi Justin, What are the version release numbers of ovirt-hosted-engine, ovirt-host, ovirt-engine, vdsm, ovirt-host-deploy, libvirt ? What type of installation are you using ? What is the status of the ovirt-ha-agent ovirt-ha-broker services ? Does vm.conf file exist ? (e.g. /var/run/ovirt-hosted-engine-ha/vm.conf) What is the output of hosted-engine --vm-status ? Can you provide agent.log, vdsm.log ? Thanks Yanir Quinn On Wed, May 2, 2018 at 3:52 AM, Justin Zygmont > wrote: After rebooting the node hosting the engine, I get this: # hosted-engine --connect-storage # hosted-engine --vm-start The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable. ovirt-ha-agent is running and the NFS server is reachable, it used to work. I don?t see which log to check or where to look _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzygmont at proofpoint.com Wed May 2 21:04:25 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Wed, 2 May 2018 21:04:25 +0000 Subject: [ovirt-users] unable to start engine In-Reply-To: References: Message-ID: Pretty close, it was a reboot of the host that had been running the engine, no maint was set. It came back on its own, I guess all it takes is it boot back up and wait? -----Original Message----- From: Martin Sivak [mailto:msivak at redhat.com] Sent: Wednesday, May 2, 2018 1:10 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] unable to start engine Hi, you are probably running 4.2 in global maintenance mode right? We do not download the vm.conf unless we need it and since you just rebooted the machine it might be missing indeed. It should recover properly if you let the agent do its job and start the engine by itself. It will download the vm.conf in the process. Best regards Martin Sivak On Wed, May 2, 2018 at 2:52 AM, Justin Zygmont wrote: > After rebooting the node hosting the engine, I get this: > > > > # hosted-engine --connect-storage > > # hosted-engine --vm-start > > The hosted engine configuration has not been retrieved from shared storage. > Please ensure that ovirt-ha-agent is running and the storage server is > reachable. > > > > ovirt-ha-agent is running and the NFS server is reachable, it used to work. 
> I don?t see which log to check or where to look > > > _______________________________________________ > Users mailing list > Users at ovirt.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.ovirt.org_ma > ilman_listinfo_users&d=DwIFaQ&c=Vxt5e0Osvvt2gflwSlsJ5DmPGcPvTRKLJyp031rXjhg&r=FiPhL0Cl1ymZlnTyAIIL75tE4L0reHcDdD-7wUtUGHA&m=8tQlwRfQPKcvoS9q-_aBf6wmkipE7qhk5BljU5nPDn0&s=LtaekvpEMs7QzLdu_hwlmZwODxlHEaZsbAT8yyCJ9B4&e= > From rgolan at redhat.com Wed May 2 21:13:13 2018 From: rgolan at redhat.com (Roy Golan) Date: Wed, 02 May 2018 21:13:13 +0000 Subject: [ovirt-users] Remote DB: How do you set server_version? In-Reply-To: <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> References: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> Message-ID: On Wed, 2 May 2018 at 23:27 Jamie Lawrence wrote: > > I've been down this road. Postgres won't lie about its version for you. > If you want to do this, you have to patch the Ovirt installer[1]. I stopped > trying to use my PG cluster at some point - the relationship between the > installer and the product combined with the overly restrictive requirements > baked into the installer[2]) makes doing so an ongoing hassle. So I treat > Ovirt's PG as an black box; disappointing, considering that we are a very > heavy PG shop with a lot of expertise and automation I can't use with Ovirt. > > If nothing has changed (my notes are from a few versions ago), everything > you need to correct is in > > /usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/constants.py > > Aside from the version, you'll also have to make the knobs for vacuuming > match those of your current installation, and I think there was another > configurable for something else I'm not remembering right now. > > Be aware that doing so is accepting an ongoing commitment to monkeying > with the installer a lot. At one time I thought doing so was the right > tradeoff, but it turns out I was wrong. > > -j > > [1] Or you could rebuild PG with a fake version. That option was > unavailable here. > [2] Not criticizing, just stating a technical fact. How folks apportion > their QA resources is their business. > > > On May 2, 2018, at 12:49 PM, ~Stack~ wrote: > > > > Greetings, > > > > Exploring hosting my engine and ovirt_engine_history db's on my > > dedicated PostgreSQL server. > > > > This is a 9.5 install on a beefy box from the postgresql.org yum repos > > that I'm using for other SQL needs too. 9.5.12 to be exact. I set up the > > database just as the documentation says and I'm doing a fresh install of > > my engine-setup. > > > > During the install, right after I give it the details for the remote I > > get this error: > > [ ERROR ] Please set: > > server_version = 9.5.9 > > in postgresql.conf on 'None'. Its location is usually > > /var/lib/pgsql/data , or somewhere under /etc/postgresql* . > > > > Huh? > > > Yes it's annoying and I think +Yaniv Dary opened a bug for it after both of got mad at it. Yaniv? Meanwhile let us know if you were able to patch constants.py as suggested. > Um. OK. > > $ grep ^server_version postgresql.conf > > server_version = 9.5.9 > > > > $ systemctl restart postgresql-9.5.service > > > > LOG: syntax error in file "/var/lib/pgsql/9.5/data/postgresql.conf" > > line 33, n...n ".9" > > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" > > contains errors > > > > > > Well that didn't work. Let's try something else. 
> > > > $ grep ^server_version postgresql.conf > > server_version = 9.5.9 > > > > $ systemctl restart postgresql-9.5.service > > LOG: parameter "server_version" cannot be changed > > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" > > contains errors > > > > Whelp. That didn't work either. I can't seem to find anything in the > > oVirt docs on setting this. > > > > How am I supposed to do this? > > > > Thanks! > > ~Stack~ > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzygmont at proofpoint.com Wed May 2 21:47:22 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Wed, 2 May 2018 21:47:22 +0000 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: I read this page and it doesn?t help since this is a host that can?t be removed, the ?remove? button is dimmed out. This is 4.22 ovirt node, but the host stays in a ?non operational? state. I notice the logs have a lot of errors, for example: the SERVER log: 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (ForkJoinPool-1-worker-14) IJ000609: Attempt to return connection twice: org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null]: java.lang.Throwable: STACKTRACE at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:722) at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:611) at org.jboss.jca.core.connectionmanager.pool.AbstractPool.returnConnection(AbstractPool.java:847) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.returnManagedConnection(AbstractConnectionManager.java:725) at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.managedConnectionDisconnected(TxConnectionManagerImpl.java:585) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.disconnectManagedConnection(AbstractConnectionManager.java:988) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.reconnectManagedConnection(AbstractConnectionManager.java:974) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:792) at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:138) at org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:64) at 
org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) [dal.jar:] at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) [dal.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.isVmRunningOnHost(HostNetworkTopologyPersisterImpl.java:210) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.logChangedDisplayNetwork(HostNetworkTopologyPersisterImpl.java:179) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.auditNetworkCompliance(HostNetworkTopologyPersisterImpl.java:148) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.lambda$persistAndEnforceNetworkCompliance$0(HostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:93) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:154) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.processRefreshCapabilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.handleRefreshCapabilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.refreshHostSync(VdsManager.java:567) [vdsbroker.jar:] at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand.executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1133) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1285) [bll.jar:] at 
org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) [bll.jar:] at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) [bll.jar:] at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:78) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:88) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:101) at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) 
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438) at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:609) at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198) at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185) at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81) at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view4.runInternalAction(Unknown Source) [bll.jar:] at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at 
org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:433) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:127) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:67) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown Source) [bll.jar:] at org.ovirt.engine.core.bll.VdsEventListener.refreshHostCapabilities(VdsEventListener.java:598) [bll.jar:] at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:47) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:30) [vdsbroker.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) [rt.jar:1.8.0_161] 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa And the ENGINE log: 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: IllegalStateException: Transaction Local transaction (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa status: ActionStatus.ABORTED >, owner=Local transaction context for provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: HandleVdsVersionCommand internal: true. 
Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Refresh host capabilities finished. Lock released. Monitoring can run now for host 'ovnode102 from data-center 'Default' 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' failed: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Exception: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] . . . . 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to refresh the capabilities of host ovnode102. 
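The IJ000457 / "Could not get JDBC Connection" entries above mean the RefreshHostCapabilities command failed because the engine briefly lost its pooled connection to its own 'engine' database, not because of anything on the host itself. One cheap sanity check is to confirm, from the engine machine, that the database answers at all. A minimal read-only sketch in Python follows; it assumes psycopg2 is installed and that the credentials live in the usual /etc/ovirt-engine/engine.conf.d/10-setup-database.conf -- the path and variable names are assumptions to verify on your own engine, not values taken from the logs above.

    # Read-only probe of the engine database. The conf path, the ENGINE_DB_*
    # variable names and the psycopg2 dependency are assumptions -- adjust to
    # your environment.
    import re
    import psycopg2

    CONF = "/etc/ovirt-engine/engine.conf.d/10-setup-database.conf"

    def read_conf(path):
        """Parse simple KEY="value" lines from an engine-setup style file."""
        values = {}
        with open(path) as f:
            for line in f:
                m = re.match(r'^(\w+)="?(.*?)"?$', line.strip())
                if m:
                    values[m.group(1)] = m.group(2)
        return values

    cfg = read_conf(CONF)
    conn = psycopg2.connect(
        host=cfg.get("ENGINE_DB_HOST", "localhost"),
        port=cfg.get("ENGINE_DB_PORT", "5432"),
        dbname=cfg.get("ENGINE_DB_DATABASE", "engine"),
        user=cfg.get("ENGINE_DB_USER", "engine"),
        password=cfg.get("ENGINE_DB_PASSWORD", ""),
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone()[0])  # expect the PostgreSQL 9.5.x build seen in the log
    conn.close()

If that probe succeeds consistently while the WARN/ERROR above keeps appearing, the problem is in the application server's connection pool rather than in PostgreSQL itself.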
2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Lock freed to object 'EngineLock:{exclusiveLocks='[74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, HOST_NETWORK74dfe965-cb11-495a-96a0-3dae6b3cbd75=HOST_NETWORK]', sharedLocks=''}' 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: 300f7345 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, GetHardwareInfoAsyncVDSCommand, log id: 300f7345 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, SetVdsStatusVDSCommand(HostName = ovnode102., SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-40) [] User admin at internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 tasks in queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 1 threads out of 100 and 99 tasks are waiting in the queue. 
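The SetVdsStatusVDSCommand entry just above is the telling one: the engine itself is marking the host NonOperational with reason NETWORK_UNREACHABLE, which normally means a logical network the cluster marks as required is not attached to any NIC on that host. A quick way to compare the host's state with the cluster's required networks is the Python SDK (ovirt-engine-sdk4). This is only a sketch -- the URL, credentials and CA path are placeholders, not values from this thread.

    # Sketch: show host status and which logical networks the cluster requires.
    # Assumes the ovirt-engine-sdk4 package; connection details are placeholders.
    import ovirtsdk4 as sdk

    conn = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",
        username="admin@internal",
        password="CHANGE_ME",
        ca_file="/etc/pki/ovirt-engine/ca.pem",
    )
    try:
        system = conn.system_service()
        host = system.hosts_service().list(search="name=ovnode102")[0]
        print(host.name, host.status)

        # Any required network missing from the host's NICs is enough to keep
        # it NonOperational until it is attached or made non-required.
        cluster = system.clusters_service().cluster_service(host.cluster.id)
        for net in cluster.networks_service().list():
            print(net.name, "required" if net.required else "optional")
    finally:
        conn.close()

If a required network shows up in that list but is not configured on the host (several VLANs were mentioned in this thread), attaching it through Setup Host Networks, or marking it as not required in the cluster's Manage Networks dialog, should let the host activate.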
2018-05-02 14:44:39,191-07 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks are waiting in the queue. From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Wednesday, May 2, 2018 12:34 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host Hi, What document are you using? See if you find the needed information here: https://ovirt.org/documentation/admin-guide/chap-Hosts/ For engine-related potential errors I recommend also checking the engine.log, and in the UI check the Events section. Regards, Yanir Quinn On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont wrote: I have tried to add a host to the engine and it just takes forever, never working or giving any error message. When I look in the engine's server.log I see it says the networks are missing. I thought when you install a node and add it to the engine it will add the networks automatically? The docs don't give much information about this, and I can't even remove the host through the UI. What steps are required to prepare a node when several VLANs are involved? _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From i.am.stack at gmail.com Wed May 2 21:56:51 2018 From: i.am.stack at gmail.com (~Stack~) Date: Wed, 2 May 2018 16:56:51 -0500 Subject: [ovirt-users] Remote DB: How do you set server_version? In-Reply-To: <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> References: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> Message-ID: On 05/02/2018 03:26 PM, Jamie Lawrence wrote: > > I've been down this road. Postgres won't lie about its version for you. If you want to do this, you have to patch the Ovirt installer[1]. I stopped trying to use my PG cluster at some point - the relationship between the installer and the product, combined with the overly restrictive requirements baked into the installer[2], makes doing so an ongoing hassle. So I treat Ovirt's PG as a black box; disappointing, considering that we are a very heavy PG shop with a lot of expertise and automation I can't use with Ovirt. > > If nothing has changed (my notes are from a few versions ago), everything you need to correct is in > > /usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/constants.py > > Aside from the version, you'll also have to make the knobs for vacuuming match those of your current installation, and I think there was another configurable for something else I'm not remembering right now. > > Be aware that doing so is accepting an ongoing commitment to monkeying with the installer a lot. At one time I thought doing so was the right tradeoff, but it turns out I was wrong. > > -j > > [1] Or you could rebuild PG with a fake version. That option was unavailable here. > [2] Not criticizing, just stating a technical fact. How folks apportion their QA resources is their business. > Yikes! OK. Thanks for the warning. I've got better things to do with my time. I will just skip this part of exploring. :-) Thank you! ~Stack~
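For anyone who does decide to go down the road Jamie describes, a zero-risk first step is simply looking at what engine-setup hard-codes before patching anything. A rough read-only sketch follows; the file path comes from Jamie's note above, while the search patterns are guesses about what is worth looking at, not a statement of what the file contains.

    # List the PostgreSQL-related constants engine-setup checks (read-only).
    # The path is from Jamie's message; the patterns are assumptions.
    import re

    PATH = ("/usr/share/ovirt-engine/setup/ovirt_engine_setup/"
            "engine_common/constants.py")

    wanted = re.compile(r"server_version|9\.5|vacuum|autovacuum", re.IGNORECASE)

    with open(PATH) as f:
        for lineno, line in enumerate(f, start=1):
            if wanted.search(line):
                print("{:5d}: {}".format(lineno, line.rstrip()))

Anything changed in that file is liable to be overwritten by the next ovirt-engine-setup package update, which is exactly the ongoing monkeying with the installer that Jamie warns about.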
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jzygmont at proofpoint.com Wed May 2 22:08:57 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Wed, 2 May 2018 22:08:57 +0000 Subject: [ovirt-users] routing Message-ID: I don't understand why you would want this unless the oVirt node itself was actually the router. Wouldn't you want to have an IP only on the management network, and leave the rest of the VLANs blank so they depend on the router to route the traffic: NIC1 -> ovirt-mgmt - gateway set NIC2 -> VLAN3, VLAN4, etc... https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/ Viewing or Editing the Gateway for a Logical Network Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway. If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host. oVirt handles multiple gateways automatically whenever an interface goes up or down. -------------- next part -------------- An HTML attachment was scrubbed... URL: From randyrue at gmail.com Wed May 2 22:41:58 2018 From: randyrue at gmail.com (Rue, Randy) Date: Wed, 2 May 2018 15:41:58 -0700 Subject: [ovirt-users] newbie questions on networking Message-ID: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> Hi All, I'm new to oVirt and have set up a basic cluster of an engine and five hosts, using the quick start and default settings as much as possible. I confess it's taken some heavy flailing to get this far; the docs all seem to be for previous versions, and the latest and greatest appears to be significantly different. I now have a working data center / cluster / hosts and a bouncing baby Ubuntu Server LTS VM. My VM is getting a DHCP address and nameservers from the data center the hosts sit in, but from the VM I can only ping the IP of the host the VM is on. I can't reach the gateway of the local subnet, or anything in the real world. Am I missing some step? The "Quick Start" doesn't say much beyond "The ovirtmgmt Management network is used for this document, however if you wish to create new logical networks see the oVirt Administration Guide." The admin guide has information on creating new networks, but I'm not spotting the parts I need to connect my VM to the real world, or how to attach another network to the host if all NICs are in use. Short Version: * Is some change needed to allow VMs on the ovirtmgmt network to connect to the real world? If so, what? * Is the ovirtmgmt network not meant for "commodity" use, and instead I should have some other network? If so, how do I connect that to the real LAN/WAN, and how do I replace the ovirtmgmt with it? (my hosts each only have two NICs bonded in a pair). Hope to hear from you, Randy in Seattle From lacey.leanne at gmail.com Wed May 2 23:58:18 2018 From: lacey.leanne at gmail.com (Lacey Powers) Date: Wed, 2 May 2018 16:58:51 -0700 Subject: [ovirt-users] Is PostgreSQL 9.5 required now as of oVirt 4.2?
Message-ID: Hi All, I have a setup of oVirt on 4.1 that has been working brilliantly for over a year, serving the production workloads at my dayjob without complaint. Originally, I had set it up with a custom PostgreSQL version, 9.6, from the PGDG repositories, since the suggested 9.2 was already quite old, and it allowed me to keep consistent versions of PostgreSQL across all the infrastructure I have. Now that I am trying to upgrade to oVirt 4.2, when I run engine-setup per the directions in the release notes documentation, engine-setup insists on PostgreSQL 9.5 from Software Collections, comparing the postgresql versions and then aborting. I don't see a way to tell it that I have a different running version of PostgreSQL that's greater than 9.5 already. Does this mean that no other versions than 9.5 are supported, and I need to downgrade and use the Software Collections version exclusively? Or is there a custom setting that I am missing that will enable me to continue using the 9.6 install I already have. Thank you for your time. Best, Lacey From didi at redhat.com Thu May 3 06:53:35 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 3 May 2018 09:53:35 +0300 Subject: [ovirt-users] Remote DB: How do you set server_version? In-Reply-To: References: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> Message-ID: On Thu, May 3, 2018 at 12:13 AM, Roy Golan wrote: > > > On Wed, 2 May 2018 at 23:27 Jamie Lawrence > wrote: >> >> >> I've been down this road. Postgres won't lie about its version for you. >> If you want to do this, you have to patch the Ovirt installer[1]. I stopped >> trying to use my PG cluster at some point - the relationship between the >> installer and the product combined with the overly restrictive requirements >> baked into the installer[2]) makes doing so an ongoing hassle. So I treat >> Ovirt's PG as an black box; disappointing, considering that we are a very >> heavy PG shop with a lot of expertise and automation I can't use with Ovirt. Sorry about that, but not sure it's such a bad choice. >> >> If nothing has changed (my notes are from a few versions ago), everything >> you need to correct is in >> >> >> /usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/constants.py >> >> Aside from the version, you'll also have to make the knobs for vacuuming >> match those of your current installation, and I think there was another >> configurable for something else I'm not remembering right now. >> >> Be aware that doing so is accepting an ongoing commitment to monkeying >> with the installer a lot. At one time I thought doing so was the right >> tradeoff, but it turns out I was wrong. >> >> -j >> >> [1] Or you could rebuild PG with a fake version. That option was >> unavailable here. >> [2] Not criticizing, just stating a technical fact. How folks apportion >> their QA resources is their business. >> >> > On May 2, 2018, at 12:49 PM, ~Stack~ wrote: >> > >> > Greetings, >> > >> > Exploring hosting my engine and ovirt_engine_history db's on my >> > dedicated PostgreSQL server. >> > >> > This is a 9.5 install on a beefy box from the postgresql.org yum repos >> > that I'm using for other SQL needs too. 9.5.12 to be exact. I set up the >> > database just as the documentation says and I'm doing a fresh install of >> > my engine-setup. 
>> > >> > During the install, right after I give it the details for the remote I >> > get this error: >> > [ ERROR ] Please set: >> > server_version = 9.5.9 >> > in postgresql.conf on 'None'. Its location is usually >> > /var/lib/pgsql/data , or somewhere under /etc/postgresql* . >> > >> > Huh? >> > > > > Yes it's annoying and I think +Yaniv Dary opened a bug for it after both of > got mad at it. Yaniv? Yaniv did, and I asked for details. Comments are welcome: https://bugzilla.redhat.com/show_bug.cgi?id=1573091 Of course, if it's so annoying, and we are so confident in PG's compatibility inside z-stream, we can simply lax the test by checking only x.y but changing no other functionality, and discuss something stronger later on (if at all). Pushed this for now, didn't verify: https://gerrit.ovirt.org/90866 Ideally, "verification" isn't merely checking that it works as expected, but also coming up with means to enhance our confidence that it's indeed safe. But it might not be such a big risk to merge this anyway, even for 4.2. > > Meanwhile let us know if you were able to patch constants.py as suggested. > >> > Um. OK. >> > $ grep ^server_version postgresql.conf >> > server_version = 9.5.9 >> > >> > $ systemctl restart postgresql-9.5.service >> > >> > LOG: syntax error in file "/var/lib/pgsql/9.5/data/postgresql.conf" >> > line 33, n...n ".9" >> > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" >> > contains errors >> > >> > >> > Well that didn't work. Let's try something else. >> > >> > $ grep ^server_version postgresql.conf >> > server_version = 9.5.9 >> > >> > $ systemctl restart postgresql-9.5.service >> > LOG: parameter "server_version" cannot be changed >> > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" >> > contains errors >> > >> > Whelp. That didn't work either. I can't seem to find anything in the >> > oVirt docs on setting this. >> > >> > How am I supposed to do this? >> > >> > Thanks! >> > ~Stack~ >> > >> > _______________________________________________ >> > Users mailing list >> > Users at ovirt.org >> > http://lists.ovirt.org/mailman/listinfo/users >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Didi From didi at redhat.com Thu May 3 06:59:45 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 3 May 2018 09:59:45 +0300 Subject: [ovirt-users] Is PostgreSQL 9.5 required now as of oVirt 4.2? In-Reply-To: References: Message-ID: On Thu, May 3, 2018 at 2:58 AM, Lacey Powers wrote: > Hi All, > > I have a setup of oVirt on 4.1 that has been working brilliantly for > over a year, serving the production workloads at my dayjob without > complaint. > > Originally, I had set it up with a custom PostgreSQL version, 9.6, from > the PGDG repositories, since the suggested 9.2 was already quite old, > and it allowed me to keep consistent versions of PostgreSQL across all > the infrastructure I have. > > Now that I am trying to upgrade to oVirt 4.2, when I run engine-setup > per the directions in the release notes documentation, engine-setup > insists on PostgreSQL 9.5 from Software Collections, comparing the > postgresql versions and then aborting. > > I don't see a way to tell it that I have a different running version of > PostgreSQL that's greater than 9.5 already. 
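As an aside on the relaxed check Didi mentions above -- comparing only the x.y part of the version and ignoring the z-stream -- the intent is roughly the following: a 9.5.12 PGDG build would satisfy a 9.5 requirement, while 9.6 still would not. A toy sketch, an assumption about the intent of the proposed patch rather than its actual code:

    # Toy illustration of a major.minor-only comparison.
    def same_major_minor(required, actual):
        req = tuple(int(p) for p in required.split(".")[:2])
        act = tuple(int(p) for p in actual.split(".")[:2])
        return req == act

    print(same_major_minor("9.5.9", "9.5.12"))  # True: z-stream difference ignored
    print(same_major_minor("9.5.9", "9.6.8"))   # False: different minor version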
> > Does this mean that no other versions than 9.5 are supported, and I need > to downgrade and use the Software Collections version exclusively? > > Or is there a custom setting that I am missing that will enable me to > continue using the 9.6 install I already have. > > Thank you for your time. It's not supported out-of-the-box, but a simple workaround exists: http://lists.ovirt.org/pipermail/users/2018-March/087573.html We should probably document this somewhere more approachable... Best regards, -- Didi From ykaul at redhat.com Thu May 3 07:42:54 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 3 May 2018 10:42:54 +0300 Subject: [ovirt-users] Remote DB: How do you set server_version? In-Reply-To: <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> References: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> Message-ID: On Wed, May 2, 2018 at 11:26 PM, Jamie Lawrence wrote: > > I've been down this road. Postgres won't lie about its version for you. > If you want to do this, you have to patch the Ovirt installer[1]. I stopped > trying to use my PG cluster at some point - the relationship between the > installer and the product combined with the overly restrictive requirements > baked into the installer[2]) makes doing so an ongoing hassle. So I treat > Ovirt's PG as an black box; disappointing, considering that we are a very > heavy PG shop with a lot of expertise and automation I can't use with Ovirt. > > Patches are welcome to improve the way oVirt uses Postgresql, supports various versions, etc. Can you give examples for some of the things you'd do differently? If nothing has changed (my notes are from a few versions ago), everything > you need to correct is in > > /usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_ > common/constants.py > > Aside from the version, you'll also have to make the knobs for vacuuming > match those of your current installation, and I think there was another > configurable for something else I'm not remembering right now. > > Be aware that doing so is accepting an ongoing commitment to monkeying > with the installer a lot. At one time I thought doing so was the right > tradeoff, but it turns out I was wrong. > At least we can start with documenting their location and default values - if that's not the case already, and when/why change them. An ovirt.org blog entry would be welcome too. Y. > -j > > [1] Or you could rebuild PG with a fake version. That option was > unavailable here. > [2] Not criticizing, just stating a technical fact. How folks apportion > their QA resources is their business. > > > On May 2, 2018, at 12:49 PM, ~Stack~ wrote: > > > > Greetings, > > > > Exploring hosting my engine and ovirt_engine_history db's on my > > dedicated PostgreSQL server. > > > > This is a 9.5 install on a beefy box from the postgresql.org yum repos > > that I'm using for other SQL needs too. 9.5.12 to be exact. I set up the > > database just as the documentation says and I'm doing a fresh install of > > my engine-setup. > > > > During the install, right after I give it the details for the remote I > > get this error: > > [ ERROR ] Please set: > > server_version = 9.5.9 > > in postgresql.conf on 'None'. Its location is usually > > /var/lib/pgsql/data , or somewhere under /etc/postgresql* . > > > > Huh? > > > > Um. OK. 
> > $ grep ^server_version postgresql.conf > > server_version = 9.5.9 > > > > $ systemctl restart postgresql-9.5.service > > > > LOG: syntax error in file "/var/lib/pgsql/9.5/data/postgresql.conf" > > line 33, n...n ".9" > > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" > > contains errors > > > > > > Well that didn't work. Let's try something else. > > > > $ grep ^server_version postgresql.conf > > server_version = 9.5.9 > > > > $ systemctl restart postgresql-9.5.service > > LOG: parameter "server_version" cannot be changed > > FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf" > > contains errors > > > > Whelp. That didn't work either. I can't seem to find anything in the > > oVirt docs on setting this. > > > > How am I supposed to do this? > > > > Thanks! > > ~Stack~ > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.lauriou at irisa.fr Thu May 3 07:58:33 2018 From: arnaud.lauriou at irisa.fr (Arnaud Lauriou) Date: Thu, 3 May 2018 09:58:33 +0200 Subject: [ovirt-users] Can't switch ovirt host to maintenance mode : image transfer in progress In-Reply-To: References: <9e7b35a7-5647-f86b-3e26-0f2fac40d556@irisa.fr> Message-ID: Hi, No upload/download are in progress. I found old entries in ovirt-engine logs about one year ago. Upload for those disks had already been cancelled. Examples for disk a1476ae5-990d-45a7-90bd-c2553f8d08d3 : 2017-05-03 11:38:48,318+02 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (org.ovirt.thread.pool-6-thread-15) [36f2c7a5-af4b-4565-98c6-033bd95097c8] Updating image upload f9f86126-5aa9-4387-9d13-f4c4e3eebf6d (image a1476ae5-990d-45a7-90bd-c2553f8d08d3) phase to Cancelled (message: 'Pausing due to client error') 2017-05-03 11:39:38,129+02 INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default task-67) [f1052bb9-9196-49c6-b54b-5bd27dde2068] Lock Acquired to object 'EngineLock:{exclusiveLocks='[a1476ae5-990d-45a7-90bd-c2553f8d08d3=]', sharedLocks='null'}' 2017-05-03 11:39:38,159+02 INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (org.ovirt.thread.pool-6-thread-3) [f1052bb9-9196-49c6-b54b-5bd27dde2068] Running command: RemoveDiskCommand internal: false. Entities affected : ID: a1476ae5-990d-45a7-90bd-c2553f8d08d3 Type: DiskAction group DELETE_DISK with role type USER 2017-05-03 11:39:38,220+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (org.ovirt.thread.pool-6-thread-3) [6dc02b63] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{runAsync='true', storagePoolId='344cbcab-2ec4-46ac-b99a-dda4ff7d3e78', ignoreFailoverLimit='false', storageDomainId='87f2500e-5a17-40b6-b95a-c0b830a499af', imageGroupId='a1476ae5-990d-45a7-90bd-c2553f8d08d3', postZeros='false', discard='false', forceDelete='false'}), log id: 127a5b4a 2017-05-03 11:39:39,266+02 INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (org.ovirt.thread.pool-6-thread-3) [6dc02b63] Lock freed to object 'EngineLock:{exclusiveLocks='[a1476ae5-990d-45a7-90bd-c2553f8d08d3=]', sharedLocks='null'} Why those uploads are display now ? Do I need to run engine-setup again ? How can I remove them definitely ? 
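On the question above about transfers that were cancelled a year ago still blocking maintenance: the engine keeps its image-transfer bookkeeping in its own database, so a read-only look at that table shows exactly which records the maintenance check is counting. A sketch follows, with the caveat that the table name (image_transfers) is an assumption about the engine schema, and that rows should not be deleted by hand without a database backup and guidance from the list.

    # Read-only peek at the engine's image transfer records. Table name and
    # credentials are assumptions -- verify before relying on this, and never
    # DELETE by hand without a backup.
    import psycopg2

    conn = psycopg2.connect(
        host="localhost", dbname="engine", user="engine",
        password="CHANGE_ME",  # see /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
    )
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM image_transfers")
        print([d[0] for d in cur.description])  # column names
        for row in cur.fetchall():
            print(row)
    conn.close()

If stale rows really are what the maintenance check is counting, that is better raised as a bug (or cleaned up with guidance) than fixed with ad-hoc SQL.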
Regards, Arnaud On 05/02/2018 11:49 AM, Fred Rolland wrote: > Hi, > > Maybe you tried to upload/download images, and these are still running. > Go to the disks tab and see if you have Upload/Download operations in > progress cancel them. > You have the option to cancel in the Download/Upload buttons. > > Regards, > Fred > > On Wed, May 2, 2018 at 11:29 AM, Arnaud Lauriou > > wrote: > > Hi, > > While upgrading host from ovirt 4.1.9 to ovirt 4.2.2, got one > ovirt host which I can't move to maintenance mode for the > following reason : > 2018-05-02 10:16:18,789+02 WARN > [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] > (default task-27) [193c3d45-5b25-4ccf-9d2c-5b792e99b9fe] > Validation of action 'MaintenanceNumberOfVdss' failed for user > admin at internal-authz. Reasons: > VAR__TYPE__HOST,VAR__ACTION__MAINTENANCE,VDS_CANNOT_MAINTENANCE_HOST_WITH_RUNNING_IMAGE_TRANSFERS,$host > xxxxx,$disks > a1476ae5-990d-45a7-90bd-c2553f8d08d3, > b2616eef-bd13-4d9b-a513-52445ebaedb6, > 13152865-2753-407a-a0e1-425e09889d92,$disks_COUNTER 3 > > I don't know what are those 3 images transfers in progress, there > is no VM running on this host. > I try to list old task running on it with the taskcleaner.sh > script on the engine : found nothing. > > How can I delete or remove those entries ? > > Regards, > > Arnaud Lauriou > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yquinn at redhat.com Thu May 3 08:19:24 2018 From: yquinn at redhat.com (Yanir Quinn) Date: Thu, 3 May 2018 11:19:24 +0300 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: Did you try switching the host to maintenance mode first ? What is the state of the data center and how many active hosts do you have now? And did you perform any updates recently or just run a fresh installation ? if so , did you run engine-setup before launching engine ? On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont wrote: > I read this page and it doesn?t help since this is a host that can?t be > removed, the ?remove? button is dimmed out. > > > > This is 4.22 ovirt node, but the host stays in a ?non operational? state. > I notice the logs have a lot of errors, for example: > > > > > > the SERVER log: > > > > 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool] (ForkJoinPool-1-worker-14) > IJ000609: Attempt to return connection twice: org.jboss.jca.core. > connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL > managed connection=org.jboss.jca.adapters.jdbc.local. > LocalManagedConnection at 3f37cf10 connection handles=0 > lastReturned=1525297223847 lastValidated=1525290267811 > lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool at 20550f35 mcp= > SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null]: java.lang.Throwable: STACKTRACE > > at org.jboss.jca.core.connectionmanager.pool.mcp. > SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection( > SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:722) > > at org.jboss.jca.core.connectionmanager.pool.mcp. 
> SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection( > SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:611) > > at org.jboss.jca.core.connectionmanager.pool. > AbstractPool.returnConnection(AbstractPool.java:847) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > returnManagedConnection(AbstractConnectionManager.java:725) > > at org.jboss.jca.core.connectionmanager.tx. > TxConnectionManagerImpl.managedConnectionDisconnected( > TxConnectionManagerImpl.java:585) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > disconnectManagedConnection(AbstractConnectionManager.java:988) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > reconnectManagedConnection(AbstractConnectionManager.java:974) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > allocateConnection(AbstractConnectionManager.java:792) > > at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection( > WrapperDataSource.java:138) > > at org.jboss.as.connector.subsystems.datasources. > WildFlyDataSource.getConnection(WildFlyDataSource.java:64) > > at org.springframework.jdbc.datasource.DataSourceUtils. > doGetConnection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.datasource.DataSourceUtils. > getConnection(DataSourceUtils.java:77) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) > [dal.jar:] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) > [dal.jar:] > > at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler. > executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] > > at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler. > executeReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] > > at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) > [dal.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.isVmRunningOnHost( > HostNetworkTopologyPersisterImpl.java:210) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.logChangedDisplayNetwork( > HostNetworkTopologyPersisterImpl.java:179) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.auditNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:148) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.lambda$persistAndEnforceNetworkCompli > ance$0(HostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. 
> executeInRequired(TransactionSupport.java:137) [utils.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInScope(TransactionSupport.java:105) [utils.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:93) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:154) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > processRefreshCapabilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > handleRefreshCapabilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > refreshHostSync(VdsManager.java:567) [vdsbroker.jar:] > > at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand > .executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase. > executeWithoutTransaction(CommandBase.java:1133) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase. > executeActionInTransactionScope(CommandBase.java:1285) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) > [bll.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInSuppressed(TransactionSupport.java:164) [utils.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInScope(TransactionSupport.java:103) [utils.jar:] > > at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) > [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) > [bll.jar:] > > at org.ovirt.engine.core.bll.executor. > DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) > [bll.jar:] > > at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) > [:1.8.0_161] > > at sun.reflect.DelegatingMethodAccessorImpl.invoke( > DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] > > at java.lang.reflect.Method.invoke(Method.java:498) > [rt.jar:1.8.0_161] > > at org.jboss.as.ee.component.ManagedReferenceMethodIntercep > tor.processInvocation(ManagedReferenceMethodInterceptor.java:52) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext$Invocation. > proceed(InterceptorContext.java:509) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > delegateInterception(Jsr299BindingsInterceptor.java:78) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > doMethodInterception(Jsr299BindingsInterceptor.java:88) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > processInvocation(Jsr299BindingsInterceptor.java:101) > > at org.jboss.as.ee.component.interceptors. > UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.invocationmetrics. 
> ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor. > processInvocation(ConcurrentContextInterceptor.java:45) > [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InitialInterceptor.processInvocation( > InitialInterceptor.java:40) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ChainedInterceptor.processInvocation( > ChainedInterceptor.java:53) > > at org.jboss.as.ee.component.interceptors. > ComponentDispatcherInterceptor.processInvocation( > ComponentDispatcherInterceptor.java:52) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.singleton. > SingletonComponentInstanceAssociationInterceptor.processInvocation( > SingletonComponentInstanceAssociationInterceptor.java:53) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext$Invocation. > proceed(InterceptorContext.java:509) > > at org.jboss.weld.ejb.AbstractEJBRequestScopeActivat > ionInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.jboss.as.weld.ejb.EjbRequestScopeActivationInter > ceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.interceptors. > CurrentInvocationContextInterceptor.processInvocation( > CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final. > jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.invocationmetrics. > WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.security.SecurityContextInterceptor. > processInvocation(SecurityContextInterceptor.java:100) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.deployment.processors. > StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.interceptors. 
> ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor. > processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-11.0.0.Final. > jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ee.component.NamespaceContextInterceptor. > processInvocation(NamespaceContextInterceptor.java:50) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ContextClassLoaderInterceptor. > processInvocation(ContextClassLoaderInterceptor.java:60) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext.run( > InterceptorContext.java:438) > > at org.wildfly.security.manager.WildFlySecurityManager.doChecked( > WildFlySecurityManager.java:609) > > at org.jboss.invocation.AccessCheckingInterceptor. > processInvocation(AccessCheckingInterceptor.java:57) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ChainedInterceptor.processInvocation( > ChainedInterceptor.java:53) > > at org.jboss.as.ee.component.ViewService$View.invoke( > ViewService.java:198) > > at org.jboss.as.ee.component.ViewDescription$1.processInvocation( > ViewDescription.java:185) > > at org.jboss.as.ee.component.ProxyInvocationHandler.invoke( > ProxyInvocationHandler.java:81) > > at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$ > view4.runInternalAction(Unknown Source) [bll.jar:] > > at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) > [:1.8.0_161] > > at sun.reflect.DelegatingMethodAccessorImpl.invoke( > DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] > > at java.lang.reflect.Method.invoke(Method.java:498) > [rt.jar:1.8.0_161] > > at org.jboss.weld.util.reflection.Reflections. > invokeAndUnwrap(Reflections.java:433) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandl > er.invoke(EnterpriseBeanProxyMethodHandler.java:127) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke( > EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnter > priseTargetBeanInstance.invoke(InjectionPointPropagatingEnter > priseTargetBeanInstance.java:67) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$ > BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown > Source) [bll.jar:] > > at org.ovirt.engine.core.bll.VdsEventListener. 
> refreshHostCapabilities(VdsEventListener.java:598) [bll.jar:] > > at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$ > SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext( > HostConnectionRefresher.java:47) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$ > SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext( > HostConnectionRefresher.java:30) [vdsbroker.jar:] > > at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$ > EventCallable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar: > ] > > at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$ > EventCallable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] > > at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinPool$WorkQueue. > runTask(ForkJoinPool.java:1056) [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) > [rt.jar:1.8.0_161] > > > > 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] > (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted > atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa > > > > > > > > > > > > And the ENGINE log: > > > > 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] > (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back > > 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] > (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: > IllegalStateException: Transaction Local transaction > (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa > status: ActionStatus.ABORTED >, owner=Local transaction context for > provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK > > 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll. > HandleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) > [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand > internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 > Type: VDS > > 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] > (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: > HandleVdsVersionCommand internal: true. Entities affected : ID: > 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS > > 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Refresh host capabilities finished. Lock released. Monitoring can run now > for host 'ovnode102 from data-center 'Default' > > 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' > failed: Could not get JDBC Connection; nested exception is > java.sql.SQLException: javax.resource.ResourceException: IJ000457: > Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core. > connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL > managed connection=org.jboss.jca.adapters.jdbc.local. 
> LocalManagedConnection at 3f37cf10 connection handles=0 > lastReturned=1525297223847 lastValidated=1525290267811 > lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool at 20550f35 mcp= > SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null] > > 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Exception: org.springframework.jdbc.CannotGetJdbcConnectionException: > Could not get JDBC Connection; nested exception is java.sql.SQLException: > javax.resource.ResourceException: IJ000457: Unchecked throwable in > managedConnectionReconnected() cl=org.jboss.jca.core. > connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL > managed connection=org.jboss.jca.adapters.jdbc.local. > LocalManagedConnection at 3f37cf10 connection handles=0 > lastReturned=1525297223847 lastValidated=1525290267811 > lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool at 20550f35 mcp= > SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null] > > at org.springframework.jdbc.datasource.DataSourceUtils. > getConnection(DataSourceUtils.java:80) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) > [dal.jar:] > > . > > . > > . > > . > > 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) > [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to > refresh the capabilities of host ovnode102. > > 2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Lock freed to object 'EngineLock:{exclusiveLocks='[ > 74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, HOST_NETWORK74dfe965-cb11- > 495a-96a0-3dae6b3cbd75=HOST_NETWORK]', sharedLocks=''}' > > 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, > GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, > VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: > 300f7345 > > 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, > GetHardwareInfoAsyncVDSCommand, log id: 300f7345 > > 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running > command: SetNonOperationalVdsCommand internal: true. Entities affected : > ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS > > 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, > SetVdsStatusVDSCommand(HostName = ovnode102., > SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 > > 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll. > provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ > f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' > > 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll. > provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. > > 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] > (default task-40) [] User admin at internal successfully logged in with > scopes: ovirt-app-api ovirt-ext=token-info:authz-search > ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate > ovirt-ext=token:password-access > > 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll. > provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[ > f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 > tasks in queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineScheduled' is using 1 threads out of 100 and 99 tasks are waiting in > the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are > waiting in the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks are waiting in > the queue. 
> *From:* Yanir Quinn [mailto:yquinn at redhat.com]
> *Sent:* Wednesday, May 2, 2018 12:34 AM
> *To:* Justin Zygmont
> *Cc:* users at ovirt.org
> *Subject:* Re: [ovirt-users] adding a host
>
> Hi,
>
> What document are you using ?
>
> See if you find the needed information here :
> https://ovirt.org/documentation/admin-guide/chap-Hosts/
>
> For engine-related potential errors I recommend also checking
> engine.log, and in the UI check the Events section.
>
> Regards,
> Yanir Quinn
>
> On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont wrote:
>
> I have tried to add a host to the engine and it just takes forever, never
> working or giving any error message. When I look in the engine's
> server.log I see it says the networks are missing.
>
> I thought when you install a node and add it to the engine it will add the
> networks automatically? The docs don't give much information about this,
> and I can't even remove the host through the UI. What steps are required
> to prepare a node when several vlans are involved?
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gsswzt at pku.edu.cn Thu May 3 08:45:33 2018
From: gsswzt at pku.edu.cn (gsswzt at pku.edu.cn)
Date: Thu, 3 May 2018 16:45:33 +0800
Subject: [ovirt-users] Pass through disk can't R/W
Message-ID: <201805031645332679264@pku.edu.cn>

Hi,

I found that libvirt created my VM with an XML definition like the following (the XML was in the scrubbed HTML attachment and did not survive the conversion to plain text):
I found https://libvirt.org/formatdomain.html#elementsHostDevSubsys which gives an example like the one below (the example markup was likewise stripped):

...

Is the <address> element
mandatory? -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.coter at gmail.com Thu May 3 09:40:07 2018 From: simon.coter at gmail.com (Simon Coter) Date: Thu, 3 May 2018 11:40:07 +0200 Subject: [ovirt-users] oVirt and Oracle Linux support Message-ID: Hi, I'm new to this ML. I started to play a bit with oVirt and I saw that you actually support RH and CentOS. Is there any plan to also support Oracle Linux ? I had to fight so much to get everything on OL correctly running. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Thu May 3 09:44:02 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Thu, 3 May 2018 11:44:02 +0200 Subject: [ovirt-users] oVirt and Oracle Linux support In-Reply-To: References: Message-ID: On Thu, May 3, 2018 at 11:40 AM, Simon Coter wrote: > Hi, > > I'm new to this ML. > I started to play a bit with oVirt and I saw that you actually support RH > and CentOS. > Is there any plan to also support Oracle Linux ? > I had to fight so much to get everything on OL correctly running. Hello Simon, you mean OL as guest? I don't think there should be so much problems, because OL is a custom version of RHEL/CentOS. Which kind of problems did you get? Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From simon.coter at gmail.com Thu May 3 09:47:12 2018 From: simon.coter at gmail.com (Simon Coter) Date: Thu, 3 May 2018 11:47:12 +0200 Subject: [ovirt-users] oVirt and Oracle Linux support In-Reply-To: References: Message-ID: Hi Luca, no, I mean OL as an host, compute-node managed by oVirt. As a guest I hadn't any kind of problem. Thanks Simon On Thu, May 3, 2018 at 11:44 AM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > On Thu, May 3, 2018 at 11:40 AM, Simon Coter > wrote: > > Hi, > > > > I'm new to this ML. > > I started to play a bit with oVirt and I saw that you actually support RH > > and CentOS. > > Is there any plan to also support Oracle Linux ? > > I had to fight so much to get everything on OL correctly running. > > Hello Simon, > > you mean OL as guest? I don't think there should be so much problems, > because OL is a custom version of RHEL/CentOS. > > Which kind of problems did you get? > > Luca > > -- > "E' assurdo impiegare gli uomini di intelligenza eccellente per fare > calcoli che potrebbero essere affidati a chiunque se si usassero delle > macchine" > Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) > > "Internet ? la pi? grande biblioteca del mondo. > Ma il problema ? che i libri sono tutti sparsi sul pavimento" > John Allen Paulos, Matematico (1945-vivente) > > Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < > lorenzetto.luca at gmail.com> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lorenzetto.luca at gmail.com Thu May 3 10:03:56 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Thu, 3 May 2018 12:03:56 +0200 Subject: [ovirt-users] oVirt and Oracle Linux support In-Reply-To: References: Message-ID: On Thu, May 3, 2018 at 11:47 AM, Simon Coter wrote: > Hi Luca, > > no, I mean OL as an host, compute-node managed by oVirt. > As a guest I hadn't any kind of problem. > Thanks Which kind of problems? I think there should be not so much if you're using OL 7. I've been using OL7 in the past and we integrated with no issues in the same lifecycles tools we're using for RHEL7 Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From ahadas at redhat.com Thu May 3 10:29:21 2018 From: ahadas at redhat.com (Arik Hadas) Date: Thu, 3 May 2018 13:29:21 +0300 Subject: [ovirt-users] Pass through disk can't R/W In-Reply-To: <201805031645332679264@pku.edu.cn> References: <201805031645332679264@pku.edu.cn> Message-ID: On Thu, May 3, 2018 at 11:45 AM, gsswzt at pku.edu.cn wrote: > Hi, > > I found my VM libvirt create xml as: > > > > >
> I found https://libvirt.org/formatdomain.html#elementsHostDevSubsys which gives an
> example like the one below (the example markup was stripped when the HTML mail was
> converted to plain text):
>
> ...
>
> Is the <address> element
mandatory? > > No, it is not mandatory. If not set, it would be determined by libvirt. See [1]. [1] https://ovirt.org/develop/release-management/features/virt/stabledeviceaddresses/ > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.betham at googlemail.com Thu May 3 11:07:00 2018 From: mark.betham at googlemail.com (Mark Betham) Date: Thu, 3 May 2018 12:07:00 +0100 Subject: [ovirt-users] Scheduling a Snapshot of a Gluster volume not working within Ovirt Message-ID: Hi Ovirt community, I am hoping you will be able to help with a problem I am experiencing when trying to schedule a snapshot of my Gluster volumes using the Ovirt portal. Below is an overview of the environment; I have an Ovirt instance running which is managing our Gluster storage. We are running Ovirt version "4.2.2.6-1.el7.centos", Gluster version "glusterfs-3.13.2-2.el7" on a base OS of "CentOS Linux release 7.4.1708 (Core)", Kernel "3.10.0 - 693.21.1.el7.x86_64", VDSM version "vdsm-4.20.23-1.el7.centos". All of the versions of software are the latest release and have been fully patched where necessary. Ovirt has been installed and configured in "Gluster" mode only, no virtualisation. The Ovirt platform runs from one of the Gluster storage nodes. Gluster runs with 2 clusters, each located at a different physical site (UK and DE). Each of the storage clusters contain 3 storage nodes. Each storage cluster contains a single gluster volume. The Gluster volume is 3 * Replicated. The Gluster volume runs on top of a LVM thin vol which has been provisioned with a XFS filesystem. The system is running a Geo-rep between the 2 geo-diverse clusters. The host servers running at the primary site are of specification 1 * Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz (8 core with HT), 64GB Ram, LSI MegaRAID SAS 9271 with bbu and cache, 8 * SAS 10K 2.5" 1.8TB enterprise drives configured in a RAID 10 array to give 6.52TB of useable space. The host servers running at the secondary site are of specification 1 * Intel(R) Xeon(R) CPU E3-1271 v3 @ 3.60GHz (8 core with HT), 32GB Ram, LSI MegaRAID SAS 9260 with bbu and cache, 8 * SAS 10K 2.5" 1.8TB enterprise drives configured in a RAID 10 array to give 6.52TB of useable space. The secondary site is for DR use only. When I first starting experiencing the issue and was unable to resolve it, I carried out a full rebuild from scratch across the two storage clusters. I had spent some time troubleshooting the issue but felt it worthwhile to ensure I had a clean platform, void of any potential issues which may be there due to some of the previous work carried out. The platform was rebuilt and data re-ingested. It is probably worth mentioning that this environment will become our new production platform, we will be migrating data and services to this new platform from our existing Gluster storage cluster. The date for the migration activity is getting closer so available time has become an issue and will not permit another full rebuild of the platform without impacting delivery date. After the rebuild with both storage clusters online, available and managed within the Ovirt platform I conducted some basic commissioning checks and I found no issues. The next step I took at this point was to setup the Geo-replication. This was brought online with no issues and data was seen to be synchronised without any problems. 
At this point the data re-ingestion was started and the new data was synchronised by the Geo-replication. The first step in bringing the snapshot schedule online was to validate that snapshots could be taken outside of the scheduler. Taking a manual snapshot via the OVirt portal worked without issue. Several were taken on both primary and secondary clusters. At this point a schedule was created on the primary site cluster via the Ovirt portal to create a snapshot of the storage at hourly intervals. The schedule was created successfully however no snapshots were ever created. Examining the logs did not show anything which I believed was a direct result of the faulty schedule but it is quite possible I missed something. I reviewed many online articles, bug reports and application manuals in relation to snapshotting. There were several loosely related support articles around snapshotting but none of the recommendations seemed to work. I did the same with manuals and again nothing that seemed to work. What I did find were several references to running snapshots along with geo-replication and that the geo-replication should be paused when creating. So I removed all existing references to any snapshot schedule, paused the Geo-repl and recreated the snapshot schedule. The schedule was never actioned and no snapshots were created. Removed Geo-repl entirely, remove all schedules and carried out a reboot of the entire platform. When the system was fully back online and no pending heal operations the schedule was re-added for the primary site only. No difference in the results and no snapshots were created from the schedule. I have now reached the point where I feel I require assistance and hence this email request. If you require any further data then please let me know and I will do my best to get it for you. Any help you can give would be greatly appreciated. Many thanks, Mark Betham -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.lauriou at irisa.fr Thu May 3 12:32:15 2018 From: arnaud.lauriou at irisa.fr (Arnaud Lauriou) Date: Thu, 3 May 2018 14:32:15 +0200 Subject: [ovirt-users] Can't switch ovirt host to maintenance mode : image transfer in progress In-Reply-To: References: <9e7b35a7-5647-f86b-3e26-0f2fac40d556@irisa.fr> Message-ID: Well I found those entries in the pg engine database : # psql -U engine -W -h localhost -p 5432 Password for user engine: psql (9.2.23, server 9.5.9) WARNING: psql version 9.2, server version 9.5. Some psql features might not work. Type "help" for help. engine=> select message from image_transfers where disk_id='a1476ae5-990d-45a7-90bd-c2553f8d08d3'; message ----------------------------- Pausing due to client error (1 row) It seems that those entries have not been deleted from the database. How can I change the status of those entries ? Do I need to delete them ? Regards, Arnaud Lauriou On 05/03/2018 09:58 AM, Arnaud Lauriou wrote: > Hi, > > No upload/download are in progress. > I found old entries in ovirt-engine logs about one year ago. Upload > for those disks had already been cancelled. 
> Examples for disk a1476ae5-990d-45a7-90bd-c2553f8d08d3 : > > 2017-05-03 11:38:48,318+02 INFO > [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] > (org.ovirt.thread.pool-6-thread-15) > [36f2c7a5-af4b-4565-98c6-033bd95097c8] Updating image upload > f9f86126-5aa9-4387-9d13-f4c4e3eebf6d (image > a1476ae5-990d-45a7-90bd-c2553f8d08d3) phase to Cancelled (message: > 'Pausing due to client error') > 2017-05-03 11:39:38,129+02 INFO > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default > task-67) [f1052bb9-9196-49c6-b54b-5bd27dde2068] Lock Acquired to > object > 'EngineLock:{exclusiveLocks='[a1476ae5-990d-45a7-90bd-c2553f8d08d3= ACTION_TYPE_FAILED_DISK_IS_BEING_REMOVED$DiskName svc-1.1>]', > sharedLocks='null'}' > 2017-05-03 11:39:38,159+02 INFO > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] > (org.ovirt.thread.pool-6-thread-3) > [f1052bb9-9196-49c6-b54b-5bd27dde2068] Running command: > RemoveDiskCommand internal: false. Entities affected : ID: > a1476ae5-990d-45a7-90bd-c2553f8d08d3 Type: DiskAction group > DELETE_DISK with role type USER > 2017-05-03 11:39:38,220+02 INFO > [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] > (org.ovirt.thread.pool-6-thread-3) [6dc02b63] START, > DeleteImageGroupVDSCommand( > DeleteImageGroupVDSCommandParameters:{runAsync='true', > storagePoolId='344cbcab-2ec4-46ac-b99a-dda4ff7d3e78', > ignoreFailoverLimit='false', > storageDomainId='87f2500e-5a17-40b6-b95a-c0b830a499af', > imageGroupId='a1476ae5-990d-45a7-90bd-c2553f8d08d3', > postZeros='false', discard='false', forceDelete='false'}), log id: > 127a5b4a > 2017-05-03 11:39:39,266+02 INFO > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] > (org.ovirt.thread.pool-6-thread-3) [6dc02b63] Lock freed to object > 'EngineLock:{exclusiveLocks='[a1476ae5-990d-45a7-90bd-c2553f8d08d3= ACTION_TYPE_FAILED_DISK_IS_BEING_REMOVED$DiskName svc-1.1>]', > sharedLocks='null'} > > Why those uploads are display now ? Do I need to run engine-setup again ? > How can I remove them definitely ? > > Regards, > > Arnaud > > On 05/02/2018 11:49 AM, Fred Rolland wrote: >> Hi, >> >> Maybe you tried to upload/download images, and these are still running. >> Go to the disks tab and see if you have Upload/Download operations in >> progress cancel them. >> You have the option to cancel in the Download/Upload buttons. >> >> Regards, >> Fred >> >> On Wed, May 2, 2018 at 11:29 AM, Arnaud Lauriou >> > wrote: >> >> Hi, >> >> While upgrading host from ovirt 4.1.9 to ovirt 4.2.2, got one >> ovirt host which I can't move to maintenance mode for the >> following reason : >> 2018-05-02 10:16:18,789+02 WARN >> [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] >> (default task-27) [193c3d45-5b25-4ccf-9d2c-5b792e99b9fe] >> Validation of action 'MaintenanceNumberOfVdss' failed for user >> admin at internal-authz. Reasons: >> VAR__TYPE__HOST,VAR__ACTION__MAINTENANCE,VDS_CANNOT_MAINTENANCE_HOST_WITH_RUNNING_IMAGE_TRANSFERS,$host >> xxxxx,$disks >> a1476ae5-990d-45a7-90bd-c2553f8d08d3, >> b2616eef-bd13-4d9b-a513-52445ebaedb6, >> 13152865-2753-407a-a0e1-425e09889d92,$disks_COUNTER 3 >> >> I don't know what are those 3 images transfers in progress, there >> is no VM running on this host. >> I try to list old task running on it with the taskcleaner.sh >> script on the engine : found nothing. >> >> How can I delete or remove those entries ? 
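A minimal sketch of the kind of manual cleanup being asked about above, based only on the image_transfers rows shown earlier in this thread. This is not an official oVirt procedure and the full table layout is not confirmed here, so stop the engine, take a backup, and inspect the rows before deleting anything; the three disk UUIDs are the ones listed in the maintenance-mode error:

# systemctl stop ovirt-engine
# engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log
# psql -U engine -W -h localhost -p 5432
engine=> BEGIN;
engine=> SELECT * FROM image_transfers;               -- check exactly what is still recorded
engine=> DELETE FROM image_transfers
engine->   WHERE disk_id IN ('a1476ae5-990d-45a7-90bd-c2553f8d08d3',
engine->                     'b2616eef-bd13-4d9b-a513-52445ebaedb6',
engine->                     '13152865-2753-407a-a0e1-425e09889d92');
engine=> COMMIT;                                      -- or ROLLBACK; if the result looks wrong
engine=> \q
# systemctl start ovirt-engine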
>> >> Regards, >> >> Arnaud Lauriou >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From lveyde at redhat.com Thu May 3 16:06:00 2018 From: lveyde at redhat.com (Lev Veyde) Date: Thu, 3 May 2018 19:06:00 +0300 Subject: [ovirt-users] [ANN] oVirt 4.2.3 Fourth Release Candidate is now available Message-ID: The oVirt Project is pleased to announce the availability of the oVirt 4.2.3 Fourth Release Candidate, as of May 3rd, 2018 This update is a release candidate of the third in a series of stabilization updates to the 4.2 series. This is pre-release software. This pre-release should not to be used in production. This release is available now for: * Red Hat Enterprise Linux 7.5 or later * CentOS Linux (or similar) 7.4 or later This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.5 or later * CentOS Linux (or similar) 7.4 or later * oVirt Node 4.2 See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: - oVirt Appliance is available - oVirt Node is available [2] Additional Resources: * Read more about the oVirt 4.2.3 release highlights: http://www.ovirt.org/release/4.2.3/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4.2.3/ [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/ -- Lev Veyde Software Engineer, RHCE | RHCVA | MCITP Red Hat Israel lev at redhat.com | lveyde at redhat.com TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From randyrue at gmail.com Thu May 3 16:30:17 2018 From: randyrue at gmail.com (Rue, Randy) Date: Thu, 3 May 2018 09:30:17 -0700 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> Message-ID: <9115b040-d035-d164-944b-e7091516c559@gmail.com> Hi Again, I'm not sure if my first post yesterday went through, I can see it in the list archives but I didn't receive a copy and I've confirmed my list settings include me getting a copy of my own posts. In any case, nobody has replied and unless I'm the only guy that needs my VMs to talk to the rest of the world I assume someone else knows how to fix this. I've read and re-read the Quick Start Guide, Installation Guide and Administration Guide even though they appear to describe an earlier version. If I've overlooked the answer and this is an RTFM issue, feel free to tell me so but I'd be grateful if you'd also tell me exactly which part of the FM to read. Again, my VM is getting an IP address and nameserver settings from the DHCP service running on the server room subnet the oVirt host sits in. From the Vm, I can ping the static IP of the host the vm is on, but not anything else on the server room subnet including the other hosts or the subnet's gateway. The "route" command sits for about 10 seconds before completing but eventually shows two rows, one for default with the correct local gateway and one for the local subnet. 
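A minimal sketch of what could be checked on the host side for the situation described here, assuming the VM's vNIC is supposed to sit on the ovirtmgmt bridge and that read-only virsh access is available; the VM name below is a placeholder:

# ip link show master ovirtmgmt     # both the physical uplink and the VM's vnetX tap should be enslaved here
# bridge link show                  # confirms the bridge ports and their state
# virsh -r domiflist my-test-vm     # shows which bridge each of the VM's interfaces is actually plugged into
# tcpdump -n -i ovirtmgmt icmp      # ping the gateway from the VM and watch whether the requests leave the bridge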
All appears to be well on the VM, the problem appears to be the host is not passing traffic. The dialogue for the interface on the host shows some logos on the ovirtmgmt network that's assigned to it, including a green "VM" tile. Is this the "outside" role for commodity connections to a VM? I've also spent some time rooting around different parts of the admin interface and found some settings under the ovirtmgmt network's vNIC Profiles for the "Network Filter." Tried changing that to "allow IPv4" and then to "No Network Filter" with no change. I hope to hear from you soon. randy in Seattle From marceloltmm at gmail.com Thu May 3 17:32:04 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Thu, 3 May 2018 14:32:04 -0300 Subject: [ovirt-users] problem to create snapshot In-Reply-To: References: Message-ID: Anyone help me? 2018-05-02 17:55 GMT-03:00 Marcelo Leandro : > Hello , > > I am geting error when try do a snapshot: > > Error msg in SPM log. > > 2018-05-02 17:46:11,235-0300 WARN (tasks/2) [storage.ResourceManager] > Resource factory failed to create resource '01_img_6e5cce71-3438-4045- > 9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6'. Canceling > request. (resourceManager:543) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", > line 539, in registerResource > obj = namespaceObj.factory.createResource(name, lockType) > File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", > line 193, in createResource > lockType) > File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", > line 122, in __getResourceCandidatesList > imgUUID=resourceName) > File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line > 213, in getChain > if srcVol.isLeaf(): > File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line > 1430, in isLeaf > return self._manifest.isLeaf() > File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line > 138, in isLeaf > return self.getVolType() == sc.type2name(sc.LEAF_VOL) > File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line > 134, in getVolType > self.voltype = self.getMetaParam(sc.VOLTYPE) > File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line > 118, in getMetaParam > meta = self.getMetadata() > File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", > line 112, in getMetadata > md = VolumeMetadata.from_lines(lines) > File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", > line 103, in from_lines > "Missing metadata key: %s: found: %s" % (e, md)) > MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing > metadata key: 'DOMAIN': found: {'NONE': '############################# > ############################################################ > ############################################################ > ############################################################ > ############################################################ > ############################################################ > ############################################################ > ############################################################ > #####################################################'}",) > 2018-05-02 17:46:11,286-0300 WARN (tasks/2) [storage.ResourceManager.Request] > (ResName='01_img_6e5cce71-3438-4045-9d54-607123e0557e. 
> ed7f1c0f-5986-4979-b783-5c465b0854c6', ReqID='a3cd9388-977b-45b9-9aa0-e431aeff8750') > Tried to cancel a processed request (resourceManager:187) > 2018-05-02 17:46:11,286-0300 ERROR (tasks/2) [storage.TaskManager.Task] > (Task='ba0766ca-08a1-4d65-a4e9-1e0171939037') Unexpected error (task:875) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, > in _run > return fn(*args, **kargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, > in run > return self.cmd(*self.argslist, **self.argsdict) > File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line > 79, in wrapper > return method(self, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1938, > in createVolume > with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE): > File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", > line 1025, in acquireResource > return _manager.acquireResource(namespace, name, lockType, > timeout=timeout) > File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", > line 475, in acquireResource > raise se.ResourceAcqusitionFailed() > ResourceAcqusitionFailed: Could not acquire resource. Probably resource > factory threw an exception.: () > > > Anyone help? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlawrence at squaretrade.com Thu May 3 18:42:45 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Thu, 3 May 2018 11:42:45 -0700 Subject: [ovirt-users] Remote DB: How do you set server_version? In-Reply-To: References: <3f809af7-4e88-68ed-bb65-99a7584ae8a3@gmail.com> <8D262853-F744-4CFA-976C-E393EC9A9996@squaretrade.com> Message-ID: <87830767-AFB7-4AB6-A0E2-A907A296C175@squaretrade.com> > On May 3, 2018, at 12:42 AM, Yaniv Kaul wrote: > Patches are welcome to improve the way oVirt uses Postgresql, supports various versions, etc. > Can you give examples for some of the things you'd do differently? A little pre-ramble - I was trying not to be offensive in talking about this, and hope I didn't bother anyone. For the record, if I were supreme dictator of the project, I might well make the same choices. Attention is limited, DB-flexibility is nowhere near a top-line feature, DB compatibility issues can be complex and subtle, and QA is a limited resource. I don't know that those are the concerns responsible for the current stance, but can totally see good reasons as to why things are they way they are. Anyway, I've been thinking about is an installer mode that treats the DB as Someone Else's Problem - it doesn't try to install, configure or monitor it, instead leaving all config and responsibility for selecting something that works to the administrator. The assumption is that crazy people like me will figure out if things won't work against a given version, and over time the list will be capable of assuming some of that QA responsibility. That leaves the normal path for the bulk of users, and those who want to assume the risk can point at their own clusters where the closest running version is almost always going to be a point-release or three away from whatever Ovirt tests against and configuration is quite different. What I have not done is written any code. I'd like to, but I'm probably several months away from having time. 
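As a rough illustration of the "Someone Else's Problem" mode sketched above, the setup-time database handling could reduce to confirming that the administrator-supplied server is reachable and recording whatever version it reports, rather than enforcing a particular server_version. The host name and credentials below are placeholders, and this is only a sketch of the idea, not an existing engine-setup option:

# psql -h db.example.com -p 5432 -U engine -d engine -At -c "SHOW server_version;"
9.5.9
# psql -h db.example.com -p 5432 -U engine -d engine -c "SELECT 1;" >/dev/null \
    && echo "engine DB reachable; version policy left to the admin" \
    || echo "engine DB unreachable; aborting"

Whether that trade-off is acceptable is exactly the QA question raised above, so it would have to stay an explicit opt-in.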
-j From ahino at redhat.com Thu May 3 18:59:13 2018 From: ahino at redhat.com (Ala Hino) Date: Thu, 3 May 2018 21:59:13 +0300 Subject: [ovirt-users] problem to create snapshot In-Reply-To: References: Message-ID: Can you please share more info? - The version you are using - Full log of vdsm and the engine Is the VM running or down while creating the snapshot? On Thu, May 3, 2018 at 8:32 PM, Marcelo Leandro wrote: > Anyone help me? > > 2018-05-02 17:55 GMT-03:00 Marcelo Leandro : > >> Hello , >> >> I am geting error when try do a snapshot: >> >> Error msg in SPM log. >> >> 2018-05-02 17:46:11,235-0300 WARN (tasks/2) [storage.ResourceManager] >> Resource factory failed to create resource '01_img_6e5cce71-3438-4045-9d5 >> 4-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6'. Canceling request. >> (resourceManager:543) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >> line 539, in registerResource >> obj = namespaceObj.factory.createResource(name, lockType) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", >> line 193, in createResource >> lockType) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", >> line 122, in __getResourceCandidatesList >> imgUUID=resourceName) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line >> 213, in getChain >> if srcVol.isLeaf(): >> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >> 1430, in isLeaf >> return self._manifest.isLeaf() >> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >> 138, in isLeaf >> return self.getVolType() == sc.type2name(sc.LEAF_VOL) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >> 134, in getVolType >> self.voltype = self.getMetaParam(sc.VOLTYPE) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >> 118, in getMetaParam >> meta = self.getMetadata() >> File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", >> line 112, in getMetadata >> md = VolumeMetadata.from_lines(lines) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", >> line 103, in from_lines >> "Missing metadata key: %s: found: %s" % (e, md)) >> MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing >> metadata key: 'DOMAIN': found: {'NONE': '############################# >> ############################################################ >> ############################################################ >> ############################################################ >> ############################################################ >> ############################################################ >> ############################################################ >> ############################################################ >> #####################################################'}",) >> 2018-05-02 17:46:11,286-0300 WARN (tasks/2) >> [storage.ResourceManager.Request] (ResName='01_img_6e5cce71-3438 >> -4045-9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6', >> ReqID='a3cd9388-977b-45b9-9aa0-e431aeff8750') Tried to cancel a >> processed request (resourceManager:187) >> 2018-05-02 17:46:11,286-0300 ERROR (tasks/2) [storage.TaskManager.Task] >> (Task='ba0766ca-08a1-4d65-a4e9-1e0171939037') Unexpected error (task:875) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >> 882, in _run >> return fn(*args, **kargs) >> File 
"/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >> 336, in run >> return self.cmd(*self.argslist, **self.argsdict) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >> line 79, in wrapper >> return method(self, *args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1938, >> in createVolume >> with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE): >> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >> line 1025, in acquireResource >> return _manager.acquireResource(namespace, name, lockType, >> timeout=timeout) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >> line 475, in acquireResource >> raise se.ResourceAcqusitionFailed() >> ResourceAcqusitionFailed: Could not acquire resource. Probably resource >> factory threw an exception.: () >> >> >> Anyone help? >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzygmont at proofpoint.com Thu May 3 19:07:47 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Thu, 3 May 2018 19:07:47 +0000 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: I can?t seem to do anything to control the host from the engine, when I select it for Maint, the engine log shows: [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] Running command: MaintenanceNumberOfVdssCommand internal: false. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDSAction group MANIPULATE_HOST with role type ADMIN 2018-05-03 12:00:37,918-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] START, SetVdsStatusVDSCommand(HostName = ovnode102, SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='PreparingForMaintenance', nonOperationalReason='NONE', stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 647d5f78 I have only 1 host in the DC, status is Up, the cluster says host count is 2 even though the second host stays Non Operational. I don?t know how to remove it. I just installed and tried to join the DC, this is a fresh installation, the engine was launched through cockpit. Heres what nodectl shows from the host: ovnode102 ~]# nodectl check Status: OK Bootloader ... OK Layer boot entries ... OK Valid boot entries ... OK Mount points ... OK Separate /var ... OK Discard is used ... OK Basic storage ... OK Initialized VG ... OK Initialized Thin Pool ... OK Initialized LVs ... OK Thin storage ... OK Checking available space in thinpool ... OK Checking thinpool auto-extend ... OK vdsmd ... OK Thanks, From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Thursday, May 3, 2018 1:19 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host Did you try switching the host to maintenance mode first ? What is the state of the data center and how many active hosts do you have now? And did you perform any updates recently or just run a fresh installation ? if so , did you run engine-setup before launching engine ? On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont > wrote: I read this page and it doesn?t help since this is a host that can?t be removed, the ?remove? button is dimmed out. This is 4.22 ovirt node, but the host stays in a ?non operational? state. 
I notice the logs have a lot of errors, for example: the SERVER log: 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (ForkJoinPool-1-worker-14) IJ000609: Attempt to return connection twice: org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null]: java.lang.Throwable: STACKTRACE at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:722) at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:611) at org.jboss.jca.core.connectionmanager.pool.AbstractPool.returnConnection(AbstractPool.java:847) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.returnManagedConnection(AbstractConnectionManager.java:725) at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.managedConnectionDisconnected(TxConnectionManagerImpl.java:585) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.disconnectManagedConnection(AbstractConnectionManager.java:988) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.reconnectManagedConnection(AbstractConnectionManager.java:974) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:792) at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:138) at org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:64) at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) [dal.jar:] at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] at 
org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) [dal.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.isVmRunningOnHost(HostNetworkTopologyPersisterImpl.java:210) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.logChangedDisplayNetwork(HostNetworkTopologyPersisterImpl.java:179) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.auditNetworkCompliance(HostNetworkTopologyPersisterImpl.java:148) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.lambda$persistAndEnforceNetworkCompliance$0(HostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:93) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:154) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.processRefreshCapabilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.handleRefreshCapabilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.refreshHostSync(VdsManager.java:567) [vdsbroker.jar:] at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand.executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1133) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1285) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) [bll.jar:] at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) [bll.jar:] at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:78) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:88) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:101) at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) 
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438) at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:609) at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198) at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185) at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81) at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view4.runInternalAction(Unknown Source) [bll.jar:] at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:433) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:127) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:67) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown Source) [bll.jar:] at org.ovirt.engine.core.bll.VdsEventListener.refreshHostCapabilities(VdsEventListener.java:598) [bll.jar:] at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:47) [vdsbroker.jar:] at 
org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:30) [vdsbroker.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) [rt.jar:1.8.0_161] 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa And the ENGINE log: 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: IllegalStateException: Transaction Local transaction (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa status: ActionStatus.ABORTED >, owner=Local transaction context for provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Refresh host capabilities finished. Lock released. 
Monitoring can run now for host 'ovnode102 from data-center 'Default' 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' failed: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Exception: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] . . . . 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to refresh the capabilities of host ovnode102. 
2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Lock freed to object 'EngineLock:{exclusiveLocks='[74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, HOST_NETWORK74dfe965-cb11-495a-96a0-3dae6b3cbd75=HOST_NETWORK]', sharedLocks=''}' 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: 300f7345 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, GetHardwareInfoAsyncVDSCommand, log id: 300f7345 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, SetVdsStatusVDSCommand(HostName = ovnode102., SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-40) [] User admin at internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 tasks in queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 1 threads out of 100 and 99 tasks are waiting in the queue. 
2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks are waiting in the queue. From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Wednesday, May 2, 2018 12:34 AM To: Justin Zygmont > Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host Hi, What document are you using ? See if you find the needed information here : https://ovirt.org/documentation/admin-guide/chap-Hosts/ For engine related potential errors i recommend also checking the engine.log and in UI check the events section. Regards, Yanir Quinn On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont > wrote: I have tried to add a host to the engine and it just takes forever never working or giving any error message. When I look in the engine?s server.log I see it says the networks are missing. I thought when you install a node and add it to the engine it will add the networks automatically? The docs don?t give much information about this, and I can?t even remove the host through the UI. What steps are required to prepare a node when several vlans are involved? _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsosic at gmail.com Thu May 3 20:26:57 2018 From: jsosic at gmail.com (Jakov Sosic) Date: Thu, 3 May 2018 22:26:57 +0200 Subject: [ovirt-users] Can't add newly reinstalled ovirt node Message-ID: <4bd2fc60-47e9-3238-4fb6-1137e70f3b3f@gmail.com> Hi, after installing 4.2.1.1 oVirt node, and adding it in hosted oVirt engine I get the following error: Ansible host-deploy playbook execution has started on host vhost01. Ansible host-deploy playbook execution has successfully finished on host vhost01. Status of host vhost01 was set to NonOperational. Host vhost01 does not comply with the cluster Lenovo networks, the following networks are missing on host: 'VLAN10' Host vhost01 installation failed. Failed to configure management network on the host. One more interesting thing: Compute => Hosts => vhost02 => Network interfaces is empty... there are no recognized interfaces on this host. Any idea? From randyrue at gmail.com Thu May 3 22:01:22 2018 From: randyrue at gmail.com (Rue, Randy) Date: Thu, 3 May 2018 15:01:22 -0700 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <9115b040-d035-d164-944b-e7091516c559@gmail.com> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> Message-ID: <099eaa94-5bf9-2c4c-51a9-d8132c873831@gmail.com> And Hi Again Again, I still haven't received any copies of the first two emails I sent to this list. Is this list moderated, or do new members require some approval before their posts will be forwarded (but will still make it to the archives)? If so, should I have gotten some reply explaining this when I subscribed? I can ping the VM from the host. Can also SSH from the host to the VM. Oddly, I can SSH from the VM to the host but it's flaky. 
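A quick way to narrow this down from the host side is to check whether the VM's frames ever leave the physical uplink at all. A rough sketch (eno1 and the MAC address below are placeholders for the actual uplink of the ovirtmgmt bridge and the VM's vNIC MAC shown in the admin portal):

# run on the oVirt host while pinging the subnet gateway from inside the VM
tcpdump -nn -e -i ovirtmgmt ether host 56:6f:1a:2b:3c:4d    # frames reaching the bridge
tcpdump -nn -e -i eno1 ether host 56:6f:1a:2b:3c:4d         # frames leaving the physical NIC

If the ARP/ICMP requests show up on the uplink but no replies ever come back, the problem is more likely VLAN or anti-spoofing configuration on the switch side than anything inside oVirt.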
After some time in the docs it appears the network I want is the "VM Network," and that the ovirtmgmt network is this by default. This option is checked for my ovirtmgmt network. So why can't my VM see the real world? Hoping to hear from you. From jzygmont at proofpoint.com Thu May 3 22:05:50 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Thu, 3 May 2018 22:05:50 +0000 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <099eaa94-5bf9-2c4c-51a9-d8132c873831@gmail.com> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <099eaa94-5bf9-2c4c-51a9-d8132c873831@gmail.com> Message-ID: I have been seeing your messages, and watching since it could be useful, and I had unanswered questions about networking. -----Original Message----- From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Rue, Randy Sent: Thursday, May 3, 2018 3:01 PM To: users at ovirt.org Subject: Re: [ovirt-users] newbie questions on networking And Hi Again Again, I still haven't received any copies of the first two emails I sent to this list. Is this list moderated, or do new members require some approval before their posts will be forwarded (but will still make it to the archives)? If so, should I have gotten some reply explaining this when I subscribed? I can ping the VM from the host. Can also SSH from the host to the VM. Oddly, I can SSH from the VM to the host but it's flaky. After some time in the docs it appears the network I want is the "VM Network," and that the ovirtmgmt network is this by default. This option is checked for my ovirtmgmt network. So why can't my VM see the real world? Hoping to hear from you. _______________________________________________ Users mailing list Users at ovirt.org https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.ovirt.org_mailman_listinfo_users&d=DwICAg&c=Vxt5e0Osvvt2gflwSlsJ5DmPGcPvTRKLJyp031rXjhg&r=FiPhL0Cl1ymZlnTyAIIL75tE4L0reHcDdD-7wUtUGHA&m=oe_SMTfc2NzVHWN_Lf570I8jGLEI64sm-MkVtoZ2izI&s=uIGujvouyyxPXU6efuYZwD-IYBTI-SzRH1WusselEjw&e= From tacito.ma at hotmail.com Thu May 3 19:57:27 2018 From: tacito.ma at hotmail.com (=?iso-8859-1?Q?T=E1cito_Chaves?=) Date: Thu, 3 May 2018 19:57:27 +0000 Subject: [ovirt-users] Ovirt 3.5 Does not start ovirt-engine after restore Message-ID: Hello guys! Could any of you help me with my manager's backup / restore? I followed the documentation to create my backup and restore on another server but I'm having trouble accessing the panel after the restore is complete. can anybody help me? The steps are listed in this file: http://paste.scsys.co.uk/577382 and just remembering, I'm using version 3.5 and the hostname is the same as the one I'm trying to migrate. The unfortunate message I receive is the following: ovirt-engine: ERROR run: 532 Error: process terminated with status code 1 With this the service does not rise! Thank you! T?cito -------------- next part -------------- An HTML attachment was scrubbed... URL: From tranceworldlogic at gmail.com Fri May 4 07:35:41 2018 From: tranceworldlogic at gmail.com (TranceWorldLogic) Date: Fri, 4 May 2018 13:05:41 +0530 Subject: [ovirt-users] vNIC Network Filter setting Message-ID: Hi, I want to set default vNIC profile "Network Filter" setting as "no-ip-muticast" rather than "vdsm-no-muticast-snooping". Can it possible in ovirt ? If yes, how to do same ? Thanks, ~Rohit -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbonazzo at redhat.com Fri May 4 07:43:56 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 4 May 2018 09:43:56 +0200 Subject: [ovirt-users] Ovirt 3.5 Does not start ovirt-engine after restore In-Reply-To: References: Message-ID: 2018-05-03 21:57 GMT+02:00 T?cito Chaves : > Hello guys! > > Could any of you help me with my manager's backup / restore? > > I followed the documentation to create my backup and restore on another > server but I'm having trouble accessing the panel after the restore is > complete. > > can anybody help me? > > The steps are listed in this file: http://paste.scsys.co.uk/577382 > > and just remembering, I'm using version 3.5 and the hostname is the same > as the one I'm trying to migrate. > oVirt 3.5 reached End Of Life on December 2015 ( https://lists.ovirt.org/pipermail/announce/2015-December/000213.html) I would suggest to take the opportunity for moving to 4.2 which is current supported version. > > > The unfortunate message I receive is the following: ovirt-engine: ERROR > run: 532 Error: process terminated with status code 1 > > With this the service does not rise! > > Thank you! > > T?cito > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA sbonazzo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sven.Achtelik at eps.aero Fri May 4 09:25:21 2018 From: Sven.Achtelik at eps.aero (Sven Achtelik) Date: Fri, 4 May 2018 09:25:21 +0000 Subject: [ovirt-users] Recovering oVirt-Engine with a backup before upgrading to 4.2 In-Reply-To: References: <41e0a4df7d7b4b04824f154982fe953f@eps.aero> <390ad7391c6d4dc1b1d92a762c509e88@eps.aero> <209f6ce7aa8a44e19d1f35d768b3025b@eps.aero> Message-ID: <17d0c786e23c48b5b85ea1352852b09b@eps.aero> Hi All, I'm still failing on this one. I tried setting the locale different, but that doesn't seem to do the trick. When looking at the logfile /var/lib/pgsql/initdb_rh-postgresql95-postgresql.log I can see that the setup process is somehow getting the information and setting the Encoding to UTF8. ---------------------------------------- /var/lib/pgsql/initdb_rh-postgresql95-postgresql.log The files belonging to this database system will be owned by user "postgres". This user must also own the server process. The database cluster will be initialized with locale "en_US.UTF-8". The default database encoding has accordingly been set to "UTF8". The default text search configuration will be set to "english". ---------------------------------------- This leads into the issue that is showing up in /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log ---------------------------------------- Checking cluster versions ok Checking database user is the install user ok Checking database connection settings ok Checking for prepared transactions ok Checking for reg* system OID user data types ok Checking for contrib/isn with bigint-passing mismatch ok Checking for invalid "line" user columns ok Creating dump of global objects ok Creating dump of database schemas engine ovirt_engine_history postgres template1 ok encodings for database "postgres" do not match: old "SQL_ASCII", new "UTF8" Failure, exiting --------------------------------------- Even changing the locale.conf to en_US without the UTF8 doesn't change anything. 
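For reference, the encodings that pg_upgrade compares can be checked up front on the existing cluster. A minimal sketch, run as root on the engine VM (assuming the engine databases live in the local PostgreSQL instance):

su - postgres -c 'psql -At -c "SELECT datname, pg_encoding_to_char(encoding) FROM pg_database;"'

If only the stock "postgres" maintenance database reports SQL_ASCII while engine and ovirt_engine_history are already UTF8, a generic pg_upgrade workaround (not an oVirt-documented procedure, and only after taking a fresh engine-backup) is to recreate that normally empty database with UTF8 before re-running engine-setup:

su - postgres -c 'psql template1 -c "DROP DATABASE postgres;"'
su - postgres -c "psql template1 -c \"CREATE DATABASE postgres TEMPLATE template0 ENCODING 'UTF8' LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';\""

This only addresses the encoding check itself; whether the rest of the 4.2 upgrade then goes through is a separate question.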
Is this information cached somewhere and needs to be reread before this will work ? Any advice on moving forward is appreciated. Thank you, Sven > -----Urspr?ngliche Nachricht----- > Von: Yedidyah Bar David [mailto:didi at redhat.com] > Gesendet: Freitag, 20. April 2018 08:55 > An: Sven Achtelik ; Simone Tiraboschi > > Cc: users at ovirt.org > Betreff: Re: [ovirt-users] Recovering oVirt-Engine with a backup before > upgrading to 4.2 > > On Fri, Apr 13, 2018 at 12:00 PM, Sven Achtelik > wrote: > > Hi All, > > > > I got my stuff up and running again. I works like described in the manual and I > used some extra hardware to jumpstart this. I'm now back on my hosted Engine > 4.1.9 with 3 Hosts running it. The Engine is running on the appliance that is > pulled by the deployment tool and after having everything stable again I > thought of upgrading to 4.2. Thing is that this is just not working with the > appliance because of some issue when upgrading Postgres inside. Looking at > the logs I found this: > > ---------------------------- > > Creating dump of database schemas > > engine > > ovirt_engine_history > > postgres > > template1 > > ok > > > > encodings for database "postgres" do not match: old "SQL_ASCII", new > "UTF8" > > Failure, exiting > > ------------------------------ > > > > After some research I found something here > https://bugzilla.redhat.com/show_bug.cgi?id=1525976, but I'm not sure what > to do with that Information. I used the appliance and didn't do anything manual > in the complete process and I'm wondering why I'm getting this issue now ? > Could someone advice on how to proceed ? > > Looks like: > > https://bugzilla.redhat.com/1528371 > > Which version do you upgrade to? > > If to one that should be covered by above bug, please attach your setup log to it. > Thanks. > > I was on vacation last week and will be in next one too. Adding Simone. > > Best regards, > > > > > Thank you, > > Sven > > > >> -----Urspr?ngliche Nachricht----- > >> Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im > >> Auftrag von Sven Achtelik > >> Gesendet: Mittwoch, 28. M?rz 2018 18:28 > >> An: Yedidyah Bar David > >> Cc: users at ovirt.org > >> Betreff: Re: [ovirt-users] Recovering oVirt-Engine with a backup > >> before upgrading to 4.2 > >> > >> > >> > >> > -----Urspr?ngliche Nachricht----- > >> > Von: Yedidyah Bar David [mailto:didi at redhat.com] > >> > Gesendet: Mittwoch, 28. M?rz 2018 10:06 > >> > An: Sven Achtelik > >> > Cc: users at ovirt.org > >> > Betreff: Re: [ovirt-users] Recovering oVirt-Engine with a backup > >> > before upgrading to 4.2 > >> > > >> > On Tue, Mar 27, 2018 at 9:14 PM, Sven Achtelik > >> > > >> > wrote: > >> > > Hi All, > >> > > > >> > > > >> > > > >> > > I?m still facing issues with my HE engine. 
Here are the steps > >> > > that I took to end up in this situation: > >> > > > >> > > > >> > > > >> > > - Update Engine from 4.1.7 to 4.1.9 > >> > > > >> > > o That worked as expected > >> > > > >> > > - Automatic Backup of Engine DB in the night > >> > > > >> > > - Upgraded Engine from 4.1.9 to 4.2.1 > >> > > > >> > > o That worked fine > >> > > > >> > > - Noticed Issues with the HA support for HE > >> > > > >> > > o Cause was not having the latest ovirt-ha agent/broker version on hosts > >> > > > >> > > - After updating the first host with the latest packages for the > >> > > Agent/Broker engine was started twice > >> > > > >> > > o As a result the Engine VM Disk was corrupted and there is no Backup > of > >> > > the Disk > >> > > > >> > > o There is also no Backup of the Engine DB with version 4.2 > >> > > > >> > > - VM disk was repaired with fsck.ext4, but DB is corrupt > >> > > > >> > > o Can?t restore the Engine DB because the Backup DB from Engine V 4.1 > >> > > > >> > > - Rolled back all changes on Engine VM to 4.1.9 and imported Backup > >> > > > >> > > o Checked for HA VMs to set as disabled and started the Engine > >> > > > >> > > - Login is fine but the Engine is having trouble picking up and > >> > > information from the Hosts > >> > > > >> > > o No information on running VMs or hosts status > >> > > > >> > > - Final Situation > >> > > > >> > > o 2 Hosts have VMs still running and I can?t stop those > >> > > > >> > > o I still have the image of my corrupted Engine VM (v4.2) > >> > > > >> > > > >> > > > >> > > Since there were no major changes after upgrading from 4.1 to > >> > > 4.2, would it be possible to manually restore the 4.1 DB to the > >> > > 4.2 Engine VM to this up and running again or are there > >> > > modifications made to the DB on upgrading that are relevant for this ? > >> > > >> > engine-backup requires restoring to the same version used to take > >> > the backup, with a single exception - on 4.0, it can restore 3.6. > >> > > >> > It's very easy to patch it to allow also 4.1->4.2, search inside it > >> > for "VALID_BACKUP_RESTORE_PAIRS". However, I do not think anyone > >> > ever tested this, so no idea might break. In 3.6->4.0 days, we did > >> > have to fix a few other things, notably apache httpd and iptables- > >firewalld: > >> > > >> > https://bugzilla.redhat.com/show_bug.cgi?id=1318580 > >> > > >> > > All my work on rolling back to 4.1.9 with the DB restore failed > >> > > as the Engine is not capable of picking up information from the hosts. > >> > > >> > No idea why, but not sure it's related to your restore flow. > >> > > >> > > Lessons learned is to always make a copy/snapshot of the engine > >> > > VM disk before upgrading anything. > >> > > >> > If it's a hosted-engine, this isn't supported - see my reply on the > >> > list ~ 1 hour ago... > >> > > >> > > What are my options on getting > >> > > back to a working environment ? Any help or hint is greatly appreciated. > >> > > >> > Restore again with either methods - what you tried, or patching > >> > engine- backup and restore directly into 4.2 - and if the engine > >> > fails to talk to the hosts, try to debug/fix this. > >> > > >> > If you suspect corruption more severe that just the db, you can > >> > install a fresh engine machine from scratch and restore to it. If > >> > it's a hosted-engine, you'll need to deploy hosted-engine from > >> > scratch, check docs about hosted-engine backup/restore. 
> >> > >> I read through those documents and it seems that I would need an > >> extra Host/Hardware which I don't have. > >> https://ovirt.org/documentation/self- > >> hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self- > >> Hosted_Environment/ > >> > >> So how would I be able to get a new setup working when I would like > >> to use the Engine-VM-Image ? At this point it sounds like I would > >> have to manually reinstall the machine that is left over and running. I'm lost > at this point. > >> > > >> > Best regards, > >> > -- > >> > Didi > >> _______________________________________________ > >> Users mailing list > >> Users at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/users > > > > -- > Didi From dholler at redhat.com Fri May 4 09:44:21 2018 From: dholler at redhat.com (Dominik Holler) Date: Fri, 4 May 2018 11:44:21 +0200 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <9115b040-d035-d164-944b-e7091516c559@gmail.com> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> Message-ID: <20180504114421.7c9cad1d@t460p> On Thu, 3 May 2018 09:30:17 -0700 "Rue, Randy" wrote: > Hi Again, > > I'm not sure if my first post yesterday went through, I can see it in > the list archives but I didn't receive a copy and I've confirmed my > list settings include me getting a copy of my own posts. In any case, > nobody has replied and unless I'm the only guy that needs my VMs to > talk to the rest of the world I assume someone else knows how to fix > this. > > I've read and re-read the Quick Start Guide, Installation Guide and > Administration Guide even though they appear to describe an earlier > version. If I've overlooked the answer and this is an RTFM issue, > feel free to tell me so but I'd be grateful if you'd also tell me > exactly which part of the FM to read. > > Again, my VM is getting an IP address and nameserver settings from > the DHCP service running on the server room subnet the oVirt host > sits in. This looks like the oVirt setup is fine. > From the Vm, I can ping the static IP of the host the vm is > on, but not anything else on the server room subnet including the > other hosts or the subnet's gateway. The "route" command sits for > about 10 seconds before completing but eventually shows two rows, one Maybe the route command tries to resolve hostnames using an unreachable DNS server? > for default with the correct local gateway and one for the local > subnet. All appears to be well on the VM, the problem appears to be > the host is not passing traffic. > Maybe the host passes the traffic, but the network equipment outside the host prevents IP spoofing of the host? Can you check if the VM traffic is pushed out of the host's interface, e.g. by tcpdump on the hosts outgoing interface? To check if the problem is the network between the hosts, you can check connectivity between two VMs on two different host connected via an external network of the ovirt-provider-ovn. This way the VM traffic will be tunneled in the physical network. > The dialogue for the interface on the host shows some logos on the > ovirtmgmt network that's assigned to it, including a green "VM" tile. > Is this the "outside" role for commodity connections to a VM? > > I've also spent some time rooting around different parts of the admin > interface and found some settings under the ovirtmgmt network's vNIC > Profiles for the "Network Filter." Tried changing that to "allow > IPv4" and then to "No Network Filter" with no change. 
> > I hope to hear from you soon. > > randy in Seattle > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From dholler at redhat.com Fri May 4 09:46:13 2018 From: dholler at redhat.com (Dominik Holler) Date: Fri, 4 May 2018 11:46:13 +0200 Subject: [ovirt-users] Can't add newly reinstalled ovirt node In-Reply-To: <4bd2fc60-47e9-3238-4fb6-1137e70f3b3f@gmail.com> References: <4bd2fc60-47e9-3238-4fb6-1137e70f3b3f@gmail.com> Message-ID: <20180504114613.24d1b6e7@t460p> On Thu, 3 May 2018 22:26:57 +0200 Jakov Sosic wrote: > Hi, > > after installing 4.2.1.1 oVirt node, and adding it in hosted oVirt > engine I get the following error: > > Ansible host-deploy playbook execution has started on host vhost01. > Ansible host-deploy playbook execution has successfully finished on > host vhost01. > Status of host vhost01 was set to NonOperational. > Host vhost01 does not comply with the cluster Lenovo networks, the > following networks are missing on host: 'VLAN10' > Host vhost01 installation failed. Failed to configure management > network on the host. > Can you please share links to the vdsm.log and supervdsm.log of the host? > One more interesting thing: > > Compute => Hosts => vhost02 => Network interfaces > > is empty... there are no recognized interfaces on this host. > > > Any idea? > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From dholler at redhat.com Fri May 4 09:49:16 2018 From: dholler at redhat.com (Dominik Holler) Date: Fri, 4 May 2018 11:49:16 +0200 Subject: [ovirt-users] vNIC Network Filter setting In-Reply-To: References: Message-ID: <20180504114916.35add628@t460p> On Fri, 4 May 2018 13:05:41 +0530 TranceWorldLogic wrote: > Hi, > > I want to set default vNIC profile "Network Filter" setting as > "no-ip-muticast" rather than "vdsm-no-muticast-snooping". > > Can it possible in ovirt ? No, I am not aware of. > If yes, how to do same ? > If you think this would be useful, please open a bug to discuss this. > > Thanks, > ~Rohit From matthias.leopold at meduniwien.ac.at Fri May 4 10:36:59 2018 From: matthias.leopold at meduniwien.ac.at (Matthias Leopold) Date: Fri, 4 May 2018 12:36:59 +0200 Subject: [ovirt-users] managing local users in 4.2 ? Message-ID: <95323c64-5dbf-0e46-f6da-b78113aaf0af@meduniwien.ac.at> Hi, i tried to create a local user in oVirt 4.2 with "ovirt-aaa-jdbc-tool user add" (like i did in oVirt 4.1.9). the command worked ok, but the created user wasn't visible in the web gui. i then used the "add" button in admin portal to add the already existing user and after that the user was visible. i didn't have to do that in 4.1.9, the "add" button was already there the, but i didn't know what to do with it. how did managing local users change in 4.2? thx matthias From callum at well.ox.ac.uk Fri May 4 11:09:41 2018 From: callum at well.ox.ac.uk (Callum Smith) Date: Fri, 4 May 2018 11:09:41 +0000 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: References: <288C5B8E-D9BC-48E5-A130-EA2F9FA9E942@well.ox.ac.uk> Message-ID: Is there any sensible way to either clean up the existing ISOs storage or re-attach it? I'm struggling to even export VMs and migrate them elsewhere with this and need to recover them asap. Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. 
callum at well.ox.ac.uk On 2 May 2018, at 15:09, Callum Smith > wrote: Attached, thank you for looking into this https://HOSTNAME/ovirt-engine/api/v4/storagedomains/f5914df0-f46c-4cc0-b666-c929aa0225ae VMISOs 11770357874688 false 0 5 false ok false
backoffice01.cluster
/vm-iso nfs
v1 false false iso 38654705664 10 false
https://HOSTNAME/ovirt-engine/api/v4/datacenters/5a54bf81-0228-02bc-0358-000000000304/storagedomains tegile-virtman-backup 17519171600384 false 0 5 false ok false maintenance
192.168.64.248
auto /export/virtman/backup nfs
v1 false false export 8589934592 10 false
VMStorage 11770357874688 false 118111600640 5 false ok true active
backoffice01.cluster
/vm-storage2 nfs
v4 false false data 38654705664 10 false
tegile-virtman 2190433320960 false 226559524864 5 false ok false active
192.168.64.248
auto /export/virtman/VirtualServerShare_1 nfs
v4 false false data 8589934592 10 false
VMISOs 11770357874688 false 0 5 false ok false maintenance
backoffice01.cluster
/vm-iso nfs
v1 false false iso 38654705664 10 false
Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 14:46, Fred Rolland > wrote: Can you share the REST API data of the Storage domain and Data Center? Here an example of the URLs, you will need to replace with correct ids. http://MY-SERVER/ovirt-engine/api/v4/storagedomains/13461356-f6f7-4a58-9897-2fac61ff40af http://MY-SERVER/ovirt-engine/api/v4/datacenters/5a5df553-022d-036d-01e8-000000000071/storagedomains On Wed, May 2, 2018 at 12:53 PM, Callum Smith > wrote: This is on 4.2.0.2-1, I've linked the main logs to dropbox simply because they're big, full of noise right now. https://www.dropbox.com/s/f8q3m5amro2a1b2/engine.log?dl=0 https://www.dropbox.com/s/uods85jk65halo3/vdsm.log?dl=0 Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 10:43, Fred Rolland > wrote: Which version are you using? Can you provide the whole log? For some reason, it looks like the Vdsm thinks that the Storage Domain is not part of the pool. On Wed, May 2, 2018 at 11:20 AM, Callum Smith > wrote: State is maintenance for the ISOs storage. I've extracted what is hopefully the relevant bits of the log. VDSM.log (SPM) 2018-05-02 09:16:03,455+0100 INFO (ioprocess communication (179084)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,456+0100 INFO (ioprocess communication (179091)) [IOProcess] Starting ioprocess (__init__:447) 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [vdsm.api] FINISH activateStorageDomain error=Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' from=::ffff:192.168.64.254,58968, flow_id=93433989-8e26-48a9-bd3a-2ab95f296c08, task_id=7f21f911-348f-45a3-b79c-e3cb11642035 (api:50) 2018-05-02 09:16:03,461+0100 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in activateStorageDomain File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1256, in activateStorageDomain pool.activateSD(sdUUID) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1130, in activateSD self.validateAttachedDomain(dom) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 557, in validateAttachedDomain raise se.StorageDomainNotInPool(self.spUUID, dom.sdUUID) StorageDomainNotInPool: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') aborting: Task is aborted: "Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304'" - code 353 (task:1181) 2018-05-02 09:16:03,462+0100 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH activateStorageDomain error=Storage domain not in pool: 
u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' (dispatcher:82) engine.log 2018-05-02 09:16:02,326+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (default task-20) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,376+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Running command: ActivateStorageDomainCommand internal: false. Entities affected : ID: f5914df0-f46c-4cc0-b666-c929aa0225ae Type: StorageAction group MANIPULATE_STORAGE_DOMA IN with role type ADMIN 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock freed to object 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', sharedLocks=''}' 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] ActivateStorage Domain. Before Connect all hosts to pool. Time: Wed May 02 09:16:02 BST 2018 2018-05-02 09:16:02,407+01 INFO [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] Running command: ConnectStorageToVdsCommand internal: true. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 2018-05-02 09:16:02,421+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] START, ConnectStorageServerVDSCommand(HostName = virtA003, StorageServerConnectionManagementVDSParameters:{hostId='fe2861fc-2b47-4807-b054-470198eda473', storagePoolId='00000000-0000-0000-0000-000000 000000', storageType='NFS', connectionList='[StorageServerConnections:{id='da392861-aedc-4f1e-97f4-6919fb01f1e9', connection='backoffice01.cluster:/vm-iso', iqn='null', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 23ce648f 2018-05-02 09:16:02,446+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] FINISH, ConnectStorageServerVDSCommand, return: {da392861-aedc-4f1e-97f4-6919fb01f1e9=0}, log id: 23ce648f 2018-05-02 09:16:02,450+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] START, ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', stor ageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}), log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command ActivateStorageDomainVDS failed: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a5 4bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSCommandParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', ignoreFailoverLimit='false', st orageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'})' execution failed: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' 2018-05-02 09:16:02,635+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] FINISH, ActivateStorageDomainVDSCommand, log id: 5c864594 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command 'org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailove rException: IRSGenericException: IRSErrorException: Storage domain not in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a54bf81-0228-02bc-0358-000000000304' (Failed with error StorageDomainNotInPool and code 353) 2018-05-02 09:16:02,636+01 INFO [org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] Command [id=22b0f3c1-9a09-4e26-8096-d83465c8f4ee]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatus Snapshot:{id='StoragePoolIsoMapId:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', storageId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}', status='Maintenance'}. 2018-05-02 09:16:02,660+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage Domain VMISOs (Data Center Default) by admin at internal-authz Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk On 2 May 2018, at 08:44, Fred Rolland > wrote: Hi, Can you provide logs from engine and Vdsm(SPM)? What is the state now? Thanks, Fred On Tue, May 1, 2018 at 4:11 PM, Callum Smith > wrote: Dear All, It appears that clicking "detach" on the ISO storage domain is a really bad idea. This has gotten half way through the procedure and now can't be recovered from. Is there any advice for re-attaching the ISO storage domain manually? An NFS mount didn't add it back to the "pool" unfortunately. On a separate note, is it possible to migrate this storage to a new location? And if so how. Regards, Callum -- Callum Smith Research Computing Core Wellcome Trust Centre for Human Genetics University of Oxford e. callum at well.ox.ac.uk _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amusil at redhat.com Fri May 4 11:31:58 2018 From: amusil at redhat.com (Ales Musil) Date: Fri, 4 May 2018 13:31:58 +0200 Subject: [ovirt-users] Can't add newly reinstalled ovirt node In-Reply-To: <20180504114613.24d1b6e7@t460p> References: <4bd2fc60-47e9-3238-4fb6-1137e70f3b3f@gmail.com> <20180504114613.24d1b6e7@t460p> Message-ID: On Fri, May 4, 2018 at 11:46 AM, Dominik Holler wrote: > On Thu, 3 May 2018 22:26:57 +0200 > Jakov Sosic wrote: > > > Hi, > > > > after installing 4.2.1.1 oVirt node, and adding it in hosted oVirt > > engine I get the following error: > > > > Ansible host-deploy playbook execution has started on host vhost01. > > Ansible host-deploy playbook execution has successfully finished on > > host vhost01. > > Status of host vhost01 was set to NonOperational. > > Host vhost01 does not comply with the cluster Lenovo networks, the > > following networks are missing on host: 'VLAN10' > > Host vhost01 installation failed. Failed to configure management > > network on the host. > > > > Can you please share links to the vdsm.log and supervdsm.log of the > host? > Seems like this one to me https://bugzilla.redhat.com/show_bug.cgi?id=1570388 Is the VLAN10 marked as required in the cluster? > > One more interesting thing: > > > > Compute => Hosts => vhost02 => Network interfaces > > > > is empty... there are no recognized interfaces on this host. > > > > > > Any idea? > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- ALES MUSIL INTERN - rhv network Red Hat EMEA amusil at redhat.com IM: amusil -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsosic at gmail.com Fri May 4 12:40:45 2018 From: jsosic at gmail.com (Jakov Sosic) Date: Fri, 4 May 2018 14:40:45 +0200 Subject: [ovirt-users] Can't add newly reinstalled ovirt node In-Reply-To: References: <4bd2fc60-47e9-3238-4fb6-1137e70f3b3f@gmail.com> <20180504114613.24d1b6e7@t460p> Message-ID: <6a6f0721-9892-1f36-e487-38d66b4b7eb6@gmail.com> On 05/04/2018 01:31 PM, Ales Musil wrote: > > > On Fri, May 4, 2018 at 11:46 AM, Dominik Holler > wrote: > > On Thu, 3 May 2018 22:26:57 +0200 > Jakov Sosic > wrote: > > > Hi, > > > > after installing 4.2.1.1 oVirt node, and adding it in hosted oVirt > > engine I get the following error: > > > > Ansible host-deploy playbook execution has started on host vhost01. > > Ansible host-deploy playbook execution has successfully finished on > > host vhost01. > > Status of host vhost01 was set to NonOperational. > > Host vhost01 does not comply with the cluster Lenovo networks, the > > following networks are missing on host: 'VLAN10' > > Host vhost01 installation failed. Failed to configure management > > network on the host. > > > > Can you please share links to the vdsm.log and supervdsm.log of the > host? > > Seems like this one to me > https://bugzilla.redhat.com/show_bug.cgi?id=1570388 Yes, it was a required network... Now, what I did, after another clean install was: nmcli device set eno1 managed no nmcli device set eno2 managed no nmcli device set eno3 managed no nmcli device set eno4 managed no and after this, adding host passed. I did remove all the required networks too, so I'm wondering if that was the cause... I can try to add back required network, and reinstall host with using the `nmcli` trick. 
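In case it helps anyone else hitting this: the NetworkManager part of the trick can be made persistent across reboots, and the engine can be asked which logical networks a cluster actually marks as required. A rough sketch (the conf file name, NIC names, engine FQDN, admin password and cluster id are all placeholders):

# keep NetworkManager away from the NICs that VDSM should own
cat > /etc/NetworkManager/conf.d/99-vdsm-unmanaged.conf <<'EOF'
[keyfile]
unmanaged-devices=interface-name:eno1;interface-name:eno2;interface-name:eno3;interface-name:eno4
EOF
systemctl restart NetworkManager

# list the cluster's networks and their <required> flag via the REST API
curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://ENGINE_FQDN/ovirt-engine/api/clusters/CLUSTER_ID/networks'

A network that is required but not yet set up on the host keeps the host NonOperational until it is either attached in "Setup Host Networks" or unmarked as required in the cluster's Logical Networks -> Manage Networks dialog.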
From sbonazzo at redhat.com Fri May 4 13:00:13 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 4 May 2018 15:00:13 +0200 Subject: [ovirt-users] [ANN] oVirt 4.2.3 is now generally available Message-ID: The oVirt Project is pleased to announce the general availability of oVirt 4.2.3 as of May 4th, 2018 This update is the third release in a series of stabilization updates to the 4.2 series. This release is available now for: * Red Hat Enterprise Linux 7.5 or later * CentOS Linux (or similar) 7.4 or later This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.5 or later * CentOS Linux (or similar) 7.4 or later * oVirt Node 4.2 See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: - oVirt Appliance will be available in a few hours - oVirt Node will be available in a few hours [2] Additional Resources: * Read more about the oVirt 4.2.3 release highlights: http://www.ovirt.org/release/4.2.3/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4.2.3/ [2] http://resources.ovirt.org/pub/ovirt-4.2/iso/ -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA sbonazzo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.leopold at meduniwien.ac.at Fri May 4 13:59:08 2018 From: matthias.leopold at meduniwien.ac.at (Matthias Leopold) Date: Fri, 4 May 2018 15:59:08 +0200 Subject: [ovirt-users] managing local users in 4.2 ? In-Reply-To: <95323c64-5dbf-0e46-f6da-b78113aaf0af@meduniwien.ac.at> References: <95323c64-5dbf-0e46-f6da-b78113aaf0af@meduniwien.ac.at> Message-ID: <41286b17-6f9d-12fc-e197-0b468da3b9a5@meduniwien.ac.at> Am 2018-05-04 um 12:36 schrieb Matthias Leopold: > Hi, > > i tried to create a local user in oVirt 4.2 with "ovirt-aaa-jdbc-tool > user add" (like i did in oVirt 4.1.9). the command worked ok, but the > created user wasn't visible in the web gui. i then used the "add" button > in admin portal to add the already existing user and after that the user > was visible. i didn't have to do that in 4.1.9, the "add" button was > already there the, but i didn't know what to do with it. how did > managing local users change in 4.2? > ok, i got it: only after setting actual permissions for a user he/she appears automatically in Admin Portal - Administration - Users. this was different in 4.1.9 IIRC matthias From ovirt at qip.ru Fri May 4 13:20:03 2018 From: ovirt at qip.ru (Vadim) Date: Fri, 04 May 2018 16:20:03 +0300 Subject: [ovirt-users] Can't get hosted-engine console in cli Message-ID: <3e3b52e50f720594517bf734d45761b04f10030f@mail.qip.ru> Hi, Users I installed ovirt-4.2.2.6-1 on a clean CentOS 7.4 with HE on an iscsi domain using cockpit and found that 1. can't get console in cli # hosted-engine --console The engine VM is running on this host Connected to domain HostedEngine Escape character is ^] error: internal error: cannot find character device from the GUI the console works fine. Maybe this is because vm.conf in 4.2.2 has display=qxl? 2. 
can't import vm from ova errors from engine.log 2018-05-03 12:19:29,112+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE- ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] Command 'org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand' failed: For input string: "" 2018-05-03 12:19:29,112+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE- ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] Exception: java.lang.NumberFormatException: For input string: "" 2018-05-03 12:19:29,119+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE- ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] EVENT_ID: IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm VM7.2.0-x86_64 to Data Center Default, Cluster Default on ovirt-4.1.9.1-1 import works fine Can you help me to solve these problems. Thanks, Vadim From stirabos at redhat.com Fri May 4 15:54:40 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 4 May 2018 17:54:40 +0200 Subject: [ovirt-users] Can't get hosted-engine console in cli In-Reply-To: <3e3b52e50f720594517bf734d45761b04f10030f@mail.qip.ru> References: <3e3b52e50f720594517bf734d45761b04f10030f@mail.qip.ru> Message-ID: On Fri, May 4, 2018 at 3:20 PM, Vadim wrote: > Hi, Users > > I installed on clean Centos7.4 ovirt-4.2.2.6-1 with HE on domain iscsi > using cockpit and found that > > 1. can't get console in cli > > # hosted-engine --console > The engine VM is running on this host > Connected to domain HostedEngine > Escape character is ^] > error: internal error: cannot find character device > > from gui console works fine. > may this is because vm.conf in 4.2.2 display=qxl ? > The hosted-engine VM currently misses the console device and the engine refuses to add it. We have an open bug tracking that: https://bugzilla.redhat.com/show_bug.cgi?id=1561964 In the mean time you can still use hosted-engine --add-console-password to temporary set a vnc password and open a VNC console to the engine VM (hosted-engine --add-console-password will also print out the requested parameters). > > 2. can't import vm from ova > errors from engine.log > > 2018-05-03 12:19:29,112+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] > (EE- > > ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] > Command > > 'org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand' failed: > For input string: "" > 2018-05-03 12:19:29,112+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] > (EE- > > ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] > Exception: > > java.lang.NumberFormatException: For input string: "" > 2018-05-03 12:19:29,119+03 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE- > > ManagedThreadFactory-engine-Thread-11299) [8cc80ae5-cccf-4554-b52d-4892548eea8c] > EVENT_ID: > > IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm VM7.2.0-x86_64 > to Data Center Default, Cluster > > Default > > on ovirt-4.1.9.1-1 import works fine > > > Can you help me to solve these problems. > > Thanks, > Vadim > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Matthew.Young3 at bp.com Thu May 3 21:01:48 2018 From: Matthew.Young3 at bp.com (Young, Matthew (Numerical Algorithms Group)) Date: Thu, 3 May 2018 21:01:48 +0000 Subject: [ovirt-users] ansible failure during host install In-Reply-To: References: Message-ID: Ovirt-engine-metrics v1.1.3.4 /etc/ovirt-engine-metrics/config.yml: ovirt_env_name: data-science-sandbox fluentd_elasticsearch_host: ovirt-metrics.datasci.bp.com ovirt_elasticsearch_mounted_storage_path: /data I have not gotten the openshift based metrics store running. It is consistently timing out while performing ?TASK [openshift_hosted : Configure a passthrough route for docker-registry].? Failure summary: 1. Hosts: localhost Play: Create Hosted Resources - registry Task: Configure a passthrough route for docker-registry Message: {u'cmd': u'/usr/bin/oc replace -f /tmp/docker-registry-DtGZqz --force -n default', u'returncode': 1, u'results': {}, u'stderr': u'error: timed out waiting for the condition\n', u'stdout': u'route "docker-registry" deleted\n'} Matt Young From: Shirly Radco Sent: Thursday, April 12, 2018 4:04 PM To: Young, Matthew (Numerical Algorithms Group) Cc: users Subject: Re: [ovirt-users] ansible failure during host install Hi, What version of ovirt-engine-metrics are you using? Did you already setup the OpenShift based metrics store to connect to? Have you configured /etc/ovirt-engine-metrics/config.yml? -- SHIRLY RADCO BI SENIOR SOFTWARE ENGINEER Red Hat Israel TRIED. TESTED. TRUSTED. On Thu, Apr 12, 2018, 21:35 Young, Matthew (Numerical Algorithms Group) > wrote: Hello all, I have been working on adding a new host to an oVirt cluster. The installation is failing in the ansible portion, specifically installing the ansible elasticsearch ca cert. I have no experience with ansible. Does anyone know what is causing this issue? Or how to create this ca cert? **** from ansible log**** ? 2018-04-12 09:32:18,114 p=56616 u=ovirt | TASK [oVirt.ovirt-fluentd/fluentd-setup : Create fluentd.conf] ***************** 2018-04-12 09:32:18,730 p=56616 u=ovirt | ok: [10.222.10.156] => { "changed": false, "checksum": "c9f88b60cd12ab8e3f3ffce1ae07654c89b06ef6", "gid": 0, "group": "root", "mode": "0640", "owner": "root", "path": "/etc/fluentd/fluent.conf", "secontext": "system_u:object_r:etc_t:s0", "size": 58, "state": "file", "uid": 0 } 2018-04-12 09:32:18,739 p=56616 u=ovirt | TASK [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd certificate] ********* 2018-04-12 09:32:18,759 p=56616 u=ovirt | skipping: [10.222.10.156] => { "changed": false, "skip_reason": "Conditional result was False" } 2018-04-12 09:32:18,768 p=56616 u=ovirt | TASK [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd elasticsearch CA certificate] *** 2018-04-12 09:32:18,810 p=56616 u=ovirt | fatal: [10.222.10.156]: FAILED! => { "changed": false } MSG: src (or content) is required 2018-04-12 09:32:18,812 p=56616 u=ovirt | PLAY RECAP ********************************************************************* 2018-04-12 09:32:18,812 p=56616 u=ovirt | 10.222.10.156 : ok=33 changed=4 unreachable=0 failed=1 ************ _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Matthew.Young3 at bp.com Thu May 3 22:20:41 2018 From: Matthew.Young3 at bp.com (Young, Matthew (Numerical Algorithms Group)) Date: Thu, 3 May 2018 22:20:41 +0000 Subject: [ovirt-users] ansible failure during host install In-Reply-To: References: Message-ID: I've gotten past that error with "oc delete route/docker-registry" before restarting the playbook. Matt From: Young, Matthew (Numerical Algorithms Group) Sent: Thursday, May 3, 2018 4:02 PM To: 'Shirly Radco' Cc: users Subject: RE: [ovirt-users] ansible failure during host install Ovirt-engine-metrics v1.1.3.4 /etc/ovirt-engine-metrics/config.yml: ovirt_env_name: data-science-sandbox fluentd_elasticsearch_host: ovirt-metrics.datasci.bp.com ovirt_elasticsearch_mounted_storage_path: /data I have not gotten the openshift based metrics store running. It is consistently timing out while performing ?TASK [openshift_hosted : Configure a passthrough route for docker-registry].? Failure summary: 1. Hosts: localhost Play: Create Hosted Resources - registry Task: Configure a passthrough route for docker-registry Message: {u'cmd': u'/usr/bin/oc replace -f /tmp/docker-registry-DtGZqz --force -n default', u'returncode': 1, u'results': {}, u'stderr': u'error: timed out waiting for the condition\n', u'stdout': u'route "docker-registry" deleted\n'} Matt Young From: Shirly Radco > Sent: Thursday, April 12, 2018 4:04 PM To: Young, Matthew (Numerical Algorithms Group) > Cc: users > Subject: Re: [ovirt-users] ansible failure during host install Hi, What version of ovirt-engine-metrics are you using? Did you already setup the OpenShift based metrics store to connect to? Have you configured /etc/ovirt-engine-metrics/config.yml? -- SHIRLY RADCO BI SENIOR SOFTWARE ENGINEER Red Hat Israel TRIED. TESTED. TRUSTED. On Thu, Apr 12, 2018, 21:35 Young, Matthew (Numerical Algorithms Group) > wrote: Hello all, I have been working on adding a new host to an oVirt cluster. The installation is failing in the ansible portion, specifically installing the ansible elasticsearch ca cert. I have no experience with ansible. Does anyone know what is causing this issue? Or how to create this ca cert? **** from ansible log**** ? 2018-04-12 09:32:18,114 p=56616 u=ovirt | TASK [oVirt.ovirt-fluentd/fluentd-setup : Create fluentd.conf] ***************** 2018-04-12 09:32:18,730 p=56616 u=ovirt | ok: [10.222.10.156] => { "changed": false, "checksum": "c9f88b60cd12ab8e3f3ffce1ae07654c89b06ef6", "gid": 0, "group": "root", "mode": "0640", "owner": "root", "path": "/etc/fluentd/fluent.conf", "secontext": "system_u:object_r:etc_t:s0", "size": 58, "state": "file", "uid": 0 } 2018-04-12 09:32:18,739 p=56616 u=ovirt | TASK [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd certificate] ********* 2018-04-12 09:32:18,759 p=56616 u=ovirt | skipping: [10.222.10.156] => { "changed": false, "skip_reason": "Conditional result was False" } 2018-04-12 09:32:18,768 p=56616 u=ovirt | TASK [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd elasticsearch CA certificate] *** 2018-04-12 09:32:18,810 p=56616 u=ovirt | fatal: [10.222.10.156]: FAILED! 
=> {
    "changed": false
}

MSG:

src (or content) is required

2018-04-12 09:32:18,812 p=56616 u=ovirt | PLAY RECAP *********************************************************************
2018-04-12 09:32:18,812 p=56616 u=ovirt | 10.222.10.156 : ok=33 changed=4 unreachable=0 failed=1

************
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anastasiya.ruzhanskaya at frtk.ru  Sat May 5 10:07:23 2018
From: anastasiya.ruzhanskaya at frtk.ru (Anastasiya Ruzhanskaya)
Date: Sat, 5 May 2018 13:07:23 +0300
Subject: [ovirt-users] oVirt messages from engine to vdsm
Message-ID: 

Hello everyone!
Currently I want to determine what information is included in the messages passing from the oVirt engine to VDSM on ovirt-node.

I made up a really simple configuration with one VM representing the engine and another the node, and I managed to successfully launch a single VM on this node. However, I have chosen to configure everything automatically. Currently the traffic is encrypted with the default certificates.
So, there are three options for me and none of them really works.

1) Find the format of the messages (what the fields are, session id for example) in the docs, but I didn't manage to find it;

2) Use Wireshark to decrypt the traffic and then apply maybe a JSON dissector to the decrypted data. I have tried many solutions (thank god I have the RSA private and public keys), but there is another session key which is generated every time the engine starts to communicate with vdsm, and which I cannot get with the help of an sslkeylog file or the ld_preload technique. Maybe someone knows the exact methodology for doing this correctly?

3) Turn off SSL in oVirt. It is simple to do that for vdsm, but for the engine, according to answers on the oVirt site, I should run 2 queries against the database. I was really surprised that psql was not installed by oVirt on my system. How did it then create the default database? (I have chosen to create everything locally and with the default configurations.)
I mean these two commands: https://www.ovirt.org/develop/developer-guide/vdsm/connecting-development-vdsm-to-engine/ . I get the following error there:
psql: FATAL: Peer authentication failed for user "engine"

Could you please guide me on which method is best and how I should correct my mistakes there?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ykaul at redhat.com  Sun May 6 03:17:02 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Sun, 6 May 2018 06:17:02 +0300
Subject: [ovirt-users] oVirt messages from engine to vdsm
In-Reply-To: 
References: 
Message-ID: 

I feel it's a question more suitable for the devel list.
In any way, perhaps setting the Engine in debug mode would suffice?
Y.

On Sat, May 5, 2018 at 1:07 PM, Anastasiya Ruzhanskaya < anastasiya.ruzhanskaya at frtk.ru> wrote:

> Hello everyone!
> Currently I want to determine what information is included in messages
> passing from oVirt engine to VDSM on ovirt-node.
>
> I made up a really simple configuration with one VM representing engine,
> another - node, a managed to successfully launch a single VM on this node.
> However, I have chosen to configure everything automatically. Currently
> traffic is encrypted with default certificates.
> So, there are three options for me and no one of them really works.
> > 1) Find the format of messages ( what the fields are, session id for > example) in docs, but I didn't manage to find it; > 2) Use wireshark to decrypt the traffic and the apply maybe a json > -dissector to the decrypted data. I have tried many solutions ( thanks god > I have rsa private and public keys but there is another session key which > is generated every time engine starts to communicate with vdsm, which I > cannot get with the help of sslkeylog file or ld_preload technology. > Maybe someone knows the exact methodology how to do this correctly? > > 3) Turn off ssl in oVirt. It is simple to do that for vdsm, but for > engine, according to answers on oVirt site, I should do 2 requests to the > database. I was really surprised that psql was not installed by oVirt on my > system. How did it then created a default database? ( I have chosen to > create all locally and with default configurations). > I mean these two commands : https://www.ovirt.org/develop/ > developer-guide/vdsm/connecting-development-vdsm-to-engine/ . I have a > following error there : > psql: FATAL: Peer authentication failed for user "engine" > > Could you please guide my what method is the best and how should I correct > my faults there? > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Sun May 6 05:53:33 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Sun, 6 May 2018 08:53:33 +0300 Subject: [ovirt-users] Ovirt 3.5 Does not start ovirt-engine after restore In-Reply-To: References: Message-ID: On Fri, May 4, 2018 at 10:43 AM, Sandro Bonazzola wrote: > > > 2018-05-03 21:57 GMT+02:00 T?cito Chaves : > >> Hello guys! >> >> Could any of you help me with my manager's backup / restore? >> >> I followed the documentation to create my backup and restore on another >> server but I'm having trouble accessing the panel after the restore is >> complete. >> >> can anybody help me? >> >> The steps are listed in this file: http://paste.scsys.co.uk/577382 >> >> and just remembering, I'm using version 3.5 and the hostname is the same >> as the one I'm trying to migrate. >> > > > oVirt 3.5 reached End Of Life on December 2015 (https://lists.ovirt.org/ > pipermail/announce/2015-December/000213.html) > I would suggest to take the opportunity for moving to 4.2 which is current > supported version. > > > >> >> >> The unfortunate message I receive is the following: ovirt-engine: ERROR >> run: 532 Error: process terminated with status code 1 >> >> With this the service does not rise! >> > Please check/share engine.log and server.log from /var/log/ovirt-engine. Please see the following for how to try to debug engine startup problems, in case above ones are empty: http://lists.ovirt.org/pipermail/devel/2017-March/029835.html Best regards, > >> Thank you! >> >> T?cito >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > SANDRO BONAZZOLA > > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D > > Red Hat EMEA > > sbonazzo at redhat.com > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Didi -------------- next part -------------- An HTML attachment was scrubbed... 
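
For reference, a minimal sketch of gathering those logs before sharing them (this assumes the default log locations and a systemd-based engine host; adjust the paths if the setup differs):

    # Tail the engine and application server logs
    tail -n 200 /var/log/ovirt-engine/engine.log
    tail -n 200 /var/log/ovirt-engine/server.log
    # Service-level messages, useful if the files above are empty
    journalctl -u ovirt-engine --since "1 hour ago"
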
URL: 
From samppah at neutraali.net  Sun May 6 08:42:20 2018
From: samppah at neutraali.net (Samuli Heinonen)
Date: Sun, 06 May 2018 11:42:20 +0300
Subject: [ovirt-users] Problems with OVN
Message-ID: <5AEEBFEC.5070901@neutraali.net>

Hi all,

I'm building a home lab using oVirt+GlusterFS in a hyperconverged(ish) setup. My setup consists of 2x nodes with an ASRock H110M-STX motherboard, Intel Pentium G4560 3,5 GHz CPU and 16 GB RAM. The motherboard has integrated Intel Gigabit I219V LAN. At the moment I'm using a Raspberry Pi as the Gluster arbiter node. The nodes are connected to a basic "desktop switch" without any management available. The hardware is nowhere near perfect, but it gets its job done and is enough for playing around.

However, I'm having problems getting OVN to work properly and I'm clueless where to look next.

oVirt is set up like this:
oVirt engine host oe / 10.0.1.101
oVirt hypervisor host o2 / 10.0.1.18
oVirt hypervisor host o3 / 10.0.1.21
OVN network 10.0.200.0/24

When I spin up a VM on o2 and o3 with an IP address in network 10.0.1.0/24, everything works fine. The VMs can interact with each other without any problems. Problems show up when I try to use an OVN based network between virtual machines. If the virtual machines are on the same hypervisor, then everything seems to work ok. But if I have a virtual machine on hypervisor o2 and another one on hypervisor o3, then TCP connections don't work very well. UDP seems to be ok and it's possible to ping hosts, do DNS & NTP queries and so on. The problem with TCP is that, for example, when taking an SSH connection to another host, at some point the connection just hangs, and most of the time it's not even possible to log in before the connection hangs. If I look at tcpdump at that point, it looks like the packets never reach the destination. Also, if I have multiple connections, then all of them hang at the same time. I have tried switching off tx checksum and other similar settings, but it didn't make any difference.

I'm suspecting that the hardware is not good enough. Before investing in new hardware I'd like to get some confirmation that everything is set up correctly. When setting up oVirt/OVN I had to run the following undocumented command to get it working at all: vdsm-tool ovn-config 10.0.1.101 10.0.1.21 (oVirt engine IP, hypervisor IP). Especially this makes me think that I have missed some crucial part of the configuration.

On the oVirt engine, /var/log/openvswitch/ovsdb-server-nb.log has error messages:
2018-05-06T08:30:05.418Z|00913|stream_ssl|WARN|SSL_read: unexpected SSL connection close
2018-05-06T08:30:05.418Z|00914|jsonrpc|WARN|ssl:127.0.0.1:53152: receive error: Protocol error
2018-05-06T08:30:05.419Z|00915|reconnect|WARN|ssl:127.0.0.1:53152: connection dropped (Protocol error)

To be honest, I'm not sure what's causing those error messages or whether they are related. I found some bug reports stating that they are not critical.

Any ideas what to do next, or should I just get better hardware? :)

Best regards,
Samuli Heinonen

From sradco at redhat.com  Sun May 6 09:47:19 2018
From: sradco at redhat.com (Shirly Radco)
Date: Sun, 6 May 2018 12:47:19 +0300
Subject: [ovirt-users] ansible failure during host install
In-Reply-To: 
References: 
Message-ID: 

If you did not install the OpenShift based metrics store you need to remove the /etc/ovirt-engine-metrics-config.yml file.
It is only required while setting up the metrics store.

This should fix your host install issue; ansible will skip the roles related to the metrics configuration on the host.
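
For example, a minimal and reversible way to do that (assuming the metrics store is really not in use; the exact path follows the correction in the follow-up message below, so verify it on your system first):

    # Move the metrics configuration aside instead of deleting it outright
    mv /etc/ovirt-engine-metrics/config.yml /etc/ovirt-engine-metrics/config.yml.bak
    # Then retry adding the host from the Administration Portal
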
If you are interested in setting up the metrics store you can have a look at https://www.ovirt.org/develop/release-management/features/metrics/metrics-store/ -- SHIRLY RADCO BI SeNIOR SOFTWARE ENGINEER Red Hat Israel TRIED. TESTED. TRUSTED. On Fri, May 4, 2018 at 1:20 AM, Young, Matthew (Numerical Algorithms Group) wrote: > I?ve gotten past that error with ?oc delete route/docker-registry? before > restart the playbook > > > > Matt > > > > *From:* Young, Matthew (Numerical Algorithms Group) > *Sent:* Thursday, May 3, 2018 4:02 PM > *To:* 'Shirly Radco' > *Cc:* users > *Subject:* RE: [ovirt-users] ansible failure during host install > > > > Ovirt-engine-metrics v1.1.3.4 > > > > /etc/ovirt-engine-metrics/config.yml: > > ovirt_env_name: data-science-sandbox > > fluentd_elasticsearch_host: ovirt-metrics.datasci.bp.com > > ovirt_elasticsearch_mounted_storage_path: /data > > > > I have not gotten the openshift based metrics store running. It is > consistently timing out while performing ?TASK [openshift_hosted : > Configure a passthrough route for docker-registry].? > > Failure summary: > > 1. Hosts: localhost > > Play: Create Hosted Resources - registry > > Task: Configure a passthrough route for docker-registry > > Message: {u'cmd': u'/usr/bin/oc replace -f > /tmp/docker-registry-DtGZqz --force -n default', u'returncode': 1, > u'results': {}, u'stderr': u'error: timed out waiting for the condition\n', > u'stdout': u'route "docker-registry" deleted\n'} > > > > Matt Young > > > > *From:* Shirly Radco > *Sent:* Thursday, April 12, 2018 4:04 PM > *To:* Young, Matthew (Numerical Algorithms Group) > *Cc:* users > *Subject:* Re: [ovirt-users] ansible failure during host install > > > > Hi, > > > > What version of ovirt-engine-metrics are you using? > > > > Did you already setup the OpenShift based metrics store to connect to? > > > > Have you configured /etc/ovirt-engine-metrics/config.yml? > > > > -- > > SHIRLY RADCO > > BI SENIOR SOFTWARE ENGINEER > > Red Hat Israel > > > TRIED. TESTED. TRUSTED. > > > > > > On Thu, Apr 12, 2018, 21:35 Young, Matthew (Numerical Algorithms Group) < > Matthew.Young3 at bp.com> wrote: > > Hello all, > > > > I have been working on adding a new host to an oVirt cluster. The > installation is failing in the ansible portion, specifically installing the > ansible elasticsearch ca cert. I have no experience with ansible. Does > anyone know what is causing this issue? Or how to create this ca cert? > > > > **** from ansible log**** > > ? > > 2018-04-12 09:32:18,114 p=56616 u=ovirt | TASK > [oVirt.ovirt-fluentd/fluentd-setup : Create fluentd.conf] > ***************** > > 2018-04-12 09:32:18,730 p=56616 u=ovirt | ok: [10.222.10.156] => { > > "changed": false, > > "checksum": "c9f88b60cd12ab8e3f3ffce1ae07654c89b06ef6", > > "gid": 0, > > "group": "root", > > "mode": "0640", > > "owner": "root", > > "path": "/etc/fluentd/fluent.conf", > > "secontext": "system_u:object_r:etc_t:s0", > > "size": 58, > > "state": "file", > > "uid": 0 > > } > > 2018-04-12 09:32:18,739 p=56616 u=ovirt | TASK > [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd certificate] > ********* > > 2018-04-12 09:32:18,759 p=56616 u=ovirt | skipping: [10.222.10.156] => { > > "changed": false, > > "skip_reason": "Conditional result was False" > > } > > 2018-04-12 09:32:18,768 p=56616 u=ovirt | TASK > [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd elasticsearch CA > certificate] *** > > 2018-04-12 09:32:18,810 p=56616 u=ovirt | fatal: [10.222.10.156]: FAILED! 
> => { > > "changed": false > > } > > > > MSG: > > > > src (or content) is required > > > > 2018-04-12 09:32:18,812 p=56616 u=ovirt | PLAY RECAP > ********************************************************************* > > 2018-04-12 09:32:18,812 p=56616 u=ovirt | 10.222.10.156 : > ok=33 changed=4 unreachable=0 failed=1 > > > > ************ > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sradco at redhat.com Sun May 6 09:47:59 2018 From: sradco at redhat.com (Shirly Radco) Date: Sun, 6 May 2018 12:47:59 +0300 Subject: [ovirt-users] ansible failure during host install In-Reply-To: References: Message-ID: -- SHIRLY RADCO BI SeNIOR SOFTWARE ENGINEER Red Hat Israel TRIED. TESTED. TRUSTED. On Sun, May 6, 2018 at 12:47 PM, Shirly Radco wrote: > If you did not install the Openshift based metrics store you need to > remove the /etc/ovirt-engine-metrics-config.yml file. > Sorry. Remove /etc/ovirt-engine-metrics/config.yml It is only required while setting up the metrics store. > > This should fix you host install issue, ansible will skip the roles > related to the metrics configuration on the host. > If you are interested in setting up the metrics store you can have a look > at https://www.ovirt.org/develop/release-management/ > features/metrics/metrics-store/ > > -- > > SHIRLY RADCO > > BI SeNIOR SOFTWARE ENGINEER > > Red Hat Israel > > TRIED. TESTED. TRUSTED. > > On Fri, May 4, 2018 at 1:20 AM, Young, Matthew (Numerical Algorithms > Group) wrote: > >> I?ve gotten past that error with ?oc delete route/docker-registry? before >> restart the playbook >> >> >> >> Matt >> >> >> >> *From:* Young, Matthew (Numerical Algorithms Group) >> *Sent:* Thursday, May 3, 2018 4:02 PM >> *To:* 'Shirly Radco' >> *Cc:* users >> *Subject:* RE: [ovirt-users] ansible failure during host install >> >> >> >> Ovirt-engine-metrics v1.1.3.4 >> >> >> >> /etc/ovirt-engine-metrics/config.yml: >> >> ovirt_env_name: data-science-sandbox >> >> fluentd_elasticsearch_host: ovirt-metrics.datasci.bp.com >> >> ovirt_elasticsearch_mounted_storage_path: /data >> >> >> >> I have not gotten the openshift based metrics store running. It is >> consistently timing out while performing ?TASK [openshift_hosted : >> Configure a passthrough route for docker-registry].? >> >> Failure summary: >> >> 1. Hosts: localhost >> >> Play: Create Hosted Resources - registry >> >> Task: Configure a passthrough route for docker-registry >> >> Message: {u'cmd': u'/usr/bin/oc replace -f >> /tmp/docker-registry-DtGZqz --force -n default', u'returncode': 1, >> u'results': {}, u'stderr': u'error: timed out waiting for the condition\n', >> u'stdout': u'route "docker-registry" deleted\n'} >> >> >> >> Matt Young >> >> >> >> *From:* Shirly Radco >> *Sent:* Thursday, April 12, 2018 4:04 PM >> *To:* Young, Matthew (Numerical Algorithms Group) >> *Cc:* users >> *Subject:* Re: [ovirt-users] ansible failure during host install >> >> >> >> Hi, >> >> >> >> What version of ovirt-engine-metrics are you using? >> >> >> >> Did you already setup the OpenShift based metrics store to connect to? >> >> >> >> Have you configured /etc/ovirt-engine-metrics/config.yml? >> >> >> >> -- >> >> SHIRLY RADCO >> >> BI SENIOR SOFTWARE ENGINEER >> >> Red Hat Israel >> >> >> TRIED. TESTED. TRUSTED. 
>> >> >> >> >> >> On Thu, Apr 12, 2018, 21:35 Young, Matthew (Numerical Algorithms Group) < >> Matthew.Young3 at bp.com> wrote: >> >> Hello all, >> >> >> >> I have been working on adding a new host to an oVirt cluster. The >> installation is failing in the ansible portion, specifically installing the >> ansible elasticsearch ca cert. I have no experience with ansible. Does >> anyone know what is causing this issue? Or how to create this ca cert? >> >> >> >> **** from ansible log**** >> >> ? >> >> 2018-04-12 09:32:18,114 p=56616 u=ovirt | TASK >> [oVirt.ovirt-fluentd/fluentd-setup : Create fluentd.conf] >> ***************** >> >> 2018-04-12 09:32:18,730 p=56616 u=ovirt | ok: [10.222.10.156] => { >> >> "changed": false, >> >> "checksum": "c9f88b60cd12ab8e3f3ffce1ae07654c89b06ef6", >> >> "gid": 0, >> >> "group": "root", >> >> "mode": "0640", >> >> "owner": "root", >> >> "path": "/etc/fluentd/fluent.conf", >> >> "secontext": "system_u:object_r:etc_t:s0", >> >> "size": 58, >> >> "state": "file", >> >> "uid": 0 >> >> } >> >> 2018-04-12 09:32:18,739 p=56616 u=ovirt | TASK >> [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd certificate] >> ********* >> >> 2018-04-12 09:32:18,759 p=56616 u=ovirt | skipping: [10.222.10.156] => { >> >> "changed": false, >> >> "skip_reason": "Conditional result was False" >> >> } >> >> 2018-04-12 09:32:18,768 p=56616 u=ovirt | TASK >> [oVirt.ovirt-fluentd/fluentd-setup : Install fluentd elasticsearch CA >> certificate] *** >> >> 2018-04-12 09:32:18,810 p=56616 u=ovirt | fatal: [10.222.10.156]: >> FAILED! => { >> >> "changed": false >> >> } >> >> >> >> MSG: >> >> >> >> src (or content) is required >> >> >> >> 2018-04-12 09:32:18,812 p=56616 u=ovirt | PLAY RECAP >> ********************************************************************* >> >> 2018-04-12 09:32:18,812 p=56616 u=ovirt | 10.222.10.156 : >> ok=33 changed=4 unreachable=0 failed=1 >> >> >> >> ************ >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mperina at redhat.com Sun May 6 09:58:12 2018 From: mperina at redhat.com (Martin Perina) Date: Sun, 06 May 2018 09:58:12 +0000 Subject: [ovirt-users] managing local users in 4.2 ? In-Reply-To: <41286b17-6f9d-12fc-e197-0b468da3b9a5@meduniwien.ac.at> References: <95323c64-5dbf-0e46-f6da-b78113aaf0af@meduniwien.ac.at> <41286b17-6f9d-12fc-e197-0b468da3b9a5@meduniwien.ac.at> Message-ID: On Fri, 4 May 2018, 15:59 Matthias Leopold, < matthias.leopold at meduniwien.ac.at> wrote: > Am 2018-05-04 um 12:36 schrieb Matthias Leopold: > > Hi, > > > > i tried to create a local user in oVirt 4.2 with "ovirt-aaa-jdbc-tool > > user add" (like i did in oVirt 4.1.9). the command worked ok, but the > > created user wasn't visible in the web gui. i then used the "add" button > > in admin portal to add the already existing user and after that the user > > was visible. i didn't have to do that in 4.1.9, the "add" button was > > already there the, but i didn't know what to do with it. how did > > managing local users change in 4.2? > > > > ok, i got it: only after setting actual permissions for a user he/she > appears automatically in Admin Portal - Administration - Users. this was > different in 4.1.9 IIRC > Sorry, but that behavior didn't change since 3.5/3.6. Only users which has directly assigned a permission are listed there. 
But those users are visible in all Add Permission tabs right after creating by aaa-jdbc tool. > matthias > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yquinn at redhat.com Sun May 6 12:49:56 2018 From: yquinn at redhat.com (Yanir Quinn) Date: Sun, 6 May 2018 15:49:56 +0300 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: For removing the non operational host : 1.Right click on the host name 2.Click on "Confirm host has been rebooted" 3.Remove the host For the issue you are experiencing with host addition, according to the engine logs you have sent, you might need to perform a few steps , see: https://bugzilla.redhat.com/show_bug.cgi?id=1516256#c2 I would also recommend to check the the host's network is not down. Also, during installation of the host,observe the messages in the Events section (UI) Hope this helps. On Thu, May 3, 2018 at 10:07 PM, Justin Zygmont wrote: > I can?t seem to do anything to control the host from the engine, when I > select it for Maint, the engine log shows: > > > > [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default > task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] Running command: > MaintenanceNumberOfVdssCommand internal: false. Entities affected : ID: > 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDSAction group > MANIPULATE_HOST with role type ADMIN > > 2018-05-03 12:00:37,918-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] START, > SetVdsStatusVDSCommand(HostName = ovnode102, > SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > status='PreparingForMaintenance', nonOperationalReason='NONE', > stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 647d5f78 > > > > > > I have only 1 host in the DC, status is Up, the cluster says host count is > 2 even though the second host stays Non Operational. I don?t know how to > remove it. > > I just installed and tried to join the DC, this is a fresh installation, > the engine was launched through cockpit. > > > > Heres what nodectl shows from the host: > > > > ovnode102 ~]# nodectl check > > Status: OK > > Bootloader ... OK > > Layer boot entries ... OK > > Valid boot entries ... OK > > Mount points ... OK > > Separate /var ... OK > > Discard is used ... OK > > Basic storage ... OK > > Initialized VG ... OK > > Initialized Thin Pool ... OK > > Initialized LVs ... OK > > Thin storage ... OK > > Checking available space in thinpool ... OK > > Checking thinpool auto-extend ... OK > > vdsmd ... OK > > > > > > > > Thanks, > > > > > > *From:* Yanir Quinn [mailto:yquinn at redhat.com] > *Sent:* Thursday, May 3, 2018 1:19 AM > > *To:* Justin Zygmont > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] adding a host > > > > Did you try switching the host to maintenance mode first ? > > What is the state of the data center and how many active hosts do you have > now? > > And did you perform any updates recently or just run a fresh installation > ? if so , did you run engine-setup before launching engine ? > > > > On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont > wrote: > > I read this page and it doesn?t help since this is a host that can?t be > removed, the ?remove? button is dimmed out. > > > > This is 4.22 ovirt node, but the host stays in a ?non operational? state. 
> I notice the logs have a lot of errors, for example: > > > > > > the SERVER log: > > > > 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool] (ForkJoinPool-1-worker-14) > IJ000609: Attempt to return connection twice: org.jboss.jca.core. > connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL > managed connection=org.jboss.jca.adapters.jdbc.local. > LocalManagedConnection at 3f37cf10 connection handles=0 > lastReturned=1525297223847 lastValidated=1525290267811 > lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool at 20550f35 mcp= > SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null]: java.lang.Throwable: STACKTRACE > > at org.jboss.jca.core.connectionmanager.pool.mcp. > SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection( > SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:722) > > at org.jboss.jca.core.connectionmanager.pool.mcp. > SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection( > SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:611) > > at org.jboss.jca.core.connectionmanager.pool. > AbstractPool.returnConnection(AbstractPool.java:847) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > returnManagedConnection(AbstractConnectionManager.java:725) > > at org.jboss.jca.core.connectionmanager.tx. > TxConnectionManagerImpl.managedConnectionDisconnected( > TxConnectionManagerImpl.java:585) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > disconnectManagedConnection(AbstractConnectionManager.java:988) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > reconnectManagedConnection(AbstractConnectionManager.java:974) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > allocateConnection(AbstractConnectionManager.java:792) > > at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection( > WrapperDataSource.java:138) > > at org.jboss.as.connector.subsystems.datasources. > WildFlyDataSource.getConnection(WildFlyDataSource.java:64) > > at org.springframework.jdbc.datasource.DataSourceUtils. > doGetConnection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.datasource.DataSourceUtils. 
> getConnection(DataSourceUtils.java:77) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) > [dal.jar:] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) > [dal.jar:] > > at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler. > executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] > > at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler. > executeReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] > > at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) > [dal.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.isVmRunningOnHost( > HostNetworkTopologyPersisterImpl.java:210) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.logChangedDisplayNetwork( > HostNetworkTopologyPersisterImpl.java:179) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.auditNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:148) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.lambda$persistAndEnforceNetworkCompli > ance$0(HostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInRequired(TransactionSupport.java:137) [utils.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInScope(TransactionSupport.java:105) [utils.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:93) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:154) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > processRefreshCapabilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > handleRefreshCapabilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > refreshHostSync(VdsManager.java:567) [vdsbroker.jar:] > > at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand > .executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase. > executeWithoutTransaction(CommandBase.java:1133) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase. 
> executeActionInTransactionScope(CommandBase.java:1285) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) > [bll.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInSuppressed(TransactionSupport.java:164) [utils.jar:] > > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInScope(TransactionSupport.java:103) [utils.jar:] > > at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) > [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) > [bll.jar:] > > at org.ovirt.engine.core.bll.executor. > DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) > [bll.jar:] > > at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) > [:1.8.0_161] > > at sun.reflect.DelegatingMethodAccessorImpl.invoke( > DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] > > at java.lang.reflect.Method.invoke(Method.java:498) > [rt.jar:1.8.0_161] > > at org.jboss.as.ee.component.ManagedReferenceMethodIntercep > tor.processInvocation(ManagedReferenceMethodInterceptor.java:52) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext$Invocation. > proceed(InterceptorContext.java:509) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > delegateInterception(Jsr299BindingsInterceptor.java:78) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > doMethodInterception(Jsr299BindingsInterceptor.java:88) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > processInvocation(Jsr299BindingsInterceptor.java:101) > > at org.jboss.as.ee.component.interceptors. > UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.invocationmetrics. > ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor. > processInvocation(ConcurrentContextInterceptor.java:45) > [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InitialInterceptor.processInvocation( > InitialInterceptor.java:40) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ChainedInterceptor.processInvocation( > ChainedInterceptor.java:53) > > at org.jboss.as.ee.component.interceptors. > ComponentDispatcherInterceptor.processInvocation( > ComponentDispatcherInterceptor.java:52) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.singleton. 
> SingletonComponentInstanceAssociationInterceptor.processInvocation( > SingletonComponentInstanceAssociationInterceptor.java:53) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext$Invocation. > proceed(InterceptorContext.java:509) > > at org.jboss.weld.ejb.AbstractEJBRequestScopeActivat > ionInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.jboss.as.weld.ejb.EjbRequestScopeActivationInter > ceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.interceptors. > CurrentInvocationContextInterceptor.processInvocation( > CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final. > jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.invocationmetrics. > WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.security.SecurityContextInterceptor. > processInvocation(SecurityContextInterceptor.java:100) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.deployment.processors. > StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.interceptors. > ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor. > processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-11.0.0.Final. > jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ee.component.NamespaceContextInterceptor. > processInvocation(NamespaceContextInterceptor.java:50) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ContextClassLoaderInterceptor. > processInvocation(ContextClassLoaderInterceptor.java:60) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext.run( > InterceptorContext.java:438) > > at org.wildfly.security.manager.WildFlySecurityManager.doChecked( > WildFlySecurityManager.java:609) > > at org.jboss.invocation.AccessCheckingInterceptor. 
> processInvocation(AccessCheckingInterceptor.java:57) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ChainedInterceptor.processInvocation( > ChainedInterceptor.java:53) > > at org.jboss.as.ee.component.ViewService$View.invoke( > ViewService.java:198) > > at org.jboss.as.ee.component.ViewDescription$1.processInvocation( > ViewDescription.java:185) > > at org.jboss.as.ee.component.ProxyInvocationHandler.invoke( > ProxyInvocationHandler.java:81) > > at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$ > view4.runInternalAction(Unknown Source) [bll.jar:] > > at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) > [:1.8.0_161] > > at sun.reflect.DelegatingMethodAccessorImpl.invoke( > DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] > > at java.lang.reflect.Method.invoke(Method.java:498) > [rt.jar:1.8.0_161] > > at org.jboss.weld.util.reflection.Reflections. > invokeAndUnwrap(Reflections.java:433) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandl > er.invoke(EnterpriseBeanProxyMethodHandler.java:127) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke( > EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnter > priseTargetBeanInstance.invoke(InjectionPointPropagatingEnter > priseTargetBeanInstance.java:67) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$ > BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown > Source) [bll.jar:] > > at org.ovirt.engine.core.bll.VdsEventListener. > refreshHostCapabilities(VdsEventListener.java:598) [bll.jar:] > > at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$ > SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext( > HostConnectionRefresher.java:47) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$ > SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext( > HostConnectionRefresher.java:30) [vdsbroker.jar:] > > at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$ > EventCallable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar: > ] > > at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$ > EventCallable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] > > at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinPool$WorkQueue. 
> runTask(ForkJoinPool.java:1056) [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) > [rt.jar:1.8.0_161] > > > > 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] > (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted > atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa > > > > > > > > > > > > And the ENGINE log: > > > > 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] > (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back > > 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] > (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: > IllegalStateException: Transaction Local transaction > (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa > status: ActionStatus.ABORTED >, owner=Local transaction context for > provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK > > 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll. > HandleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) > [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand > internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 > Type: VDS > > 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] > (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: > HandleVdsVersionCommand internal: true. Entities affected : ID: > 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS > > 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Refresh host capabilities finished. Lock released. Monitoring can run now > for host 'ovnode102 from data-center 'Default' > > 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' > failed: Could not get JDBC Connection; nested exception is > java.sql.SQLException: javax.resource.ResourceException: IJ000457: > Unchecked throwable in managedConnectionReconnected() > cl=org.jboss.jca.core.connectionmanager.listener. > TxConnectionListener at 1fab84d7[state=NORMAL managed > connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection@ > 3f37cf10 connection handles=0 lastReturned=1525297223847 > lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false > pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 > mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null] > > 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Exception: org.springframework.jdbc.CannotGetJdbcConnectionException: > Could not get JDBC Connection; nested exception is java.sql.SQLException: > javax.resource.ResourceException: IJ000457: Unchecked throwable in > managedConnectionReconnected() cl=org.jboss.jca.core. 
> connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL > managed connection=org.jboss.jca.adapters.jdbc.local. > LocalManagedConnection at 3f37cf10 connection handles=0 > lastReturned=1525297223847 lastValidated=1525290267811 > lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool at 20550f35 mcp= > SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null] > > at org.springframework.jdbc.datasource.DataSourceUtils. > getConnection(DataSourceUtils.java:80) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) > [dal.jar:] > > . > > . > > . > > . > > 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) > [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to > refresh the capabilities of host ovnode102. > > 2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Lock freed to object 'EngineLock:{exclusiveLocks='[ > 74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, HOST_NETWORK74dfe965-cb11- > 495a-96a0-3dae6b3cbd75=HOST_NETWORK]', sharedLocks=''}' > > 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, > GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, > VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: > 300f7345 > > 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, > GetHardwareInfoAsyncVDSCommand, log id: 300f7345 > > 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running > command: SetNonOperationalVdsCommand internal: true. Entities affected : > ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS > > 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, > SetVdsStatusVDSCommand(HostName = ovnode102., > SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 > > 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll. 
> provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ > f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' > > 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll. > provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. > > 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] > (default task-40) [] User admin at internal successfully logged in with > scopes: ovirt-app-api ovirt-ext=token-info:authz-search > ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate > ovirt-ext=token:password-access > > 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll. > provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[ > f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 > tasks in queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineScheduled' is using 1 threads out of 100 and 99 tasks are waiting in > the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are > waiting in the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks are waiting in > the queue. > > > > > > > > > > > > > > > > > > > > > > > > *From:* Yanir Quinn [mailto:yquinn at redhat.com] > *Sent:* Wednesday, May 2, 2018 12:34 AM > *To:* Justin Zygmont > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] adding a host > > > > Hi, > > What document are you using ? > > See if you find the needed information here : https://ovirt.org/ > documentation/admin-guide/chap-Hosts/ > > > > > For engine related potential errors i recommend also checking the > engine.log and in UI check the events section. > > Regards, > > Yanir Quinn > > > > On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont > wrote: > > I have tried to add a host to the engine and it just takes forever never > working or giving any error message. When I look in the engine?s > server.log I see it says the networks are missing. > > I thought when you install a node and add it to the engine it will add the > networks automatically? The docs don?t give much information about this, > and I can?t even remove the host through the UI. What steps are required > to prepare a node when several vlans are involved? 
> > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Sun May 6 13:00:49 2018 From: rightkicktech at gmail.com (Alex K) Date: Sun, 06 May 2018 13:00:49 +0000 Subject: [ovirt-users] managing local users in 4.2 ? In-Reply-To: References: <95323c64-5dbf-0e46-f6da-b78113aaf0af@meduniwien.ac.at> <41286b17-6f9d-12fc-e197-0b468da3b9a5@meduniwien.ac.at> Message-ID: Indeed the behaviour has not changed. When adding users at 4.1 with ovirt-aaa-jdbc-tool i still need to add them through gui for users to be listed in gui. Alex On Sun, May 6, 2018, 12:58 Martin Perina wrote: > > > On Fri, 4 May 2018, 15:59 Matthias Leopold, < > matthias.leopold at meduniwien.ac.at> wrote: > >> Am 2018-05-04 um 12:36 schrieb Matthias Leopold: >> > Hi, >> > >> > i tried to create a local user in oVirt 4.2 with "ovirt-aaa-jdbc-tool >> > user add" (like i did in oVirt 4.1.9). the command worked ok, but the >> > created user wasn't visible in the web gui. i then used the "add" >> button >> > in admin portal to add the already existing user and after that the >> user >> > was visible. i didn't have to do that in 4.1.9, the "add" button was >> > already there the, but i didn't know what to do with it. how did >> > managing local users change in 4.2? >> > >> >> ok, i got it: only after setting actual permissions for a user he/she >> appears automatically in Admin Portal - Administration - Users. this was >> different in 4.1.9 IIRC >> > > Sorry, but that behavior didn't change since 3.5/3.6. Only users which has > directly assigned a permission are listed there. But those users are > visible in all Add Permission tabs right after creating by aaa-jdbc tool. > > >> matthias >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahino at redhat.com Sun May 6 14:26:27 2018 From: ahino at redhat.com (Ala Hino) Date: Sun, 6 May 2018 17:26:27 +0300 Subject: [ovirt-users] problem to create snapshot In-Reply-To: References: Message-ID: [Please always CC ovirt-users so other engineer can provide help] It seems that the storage domain is corrupted. Can you please run the following command and send the output? vdsm-client StorageDomain getInfo storagedomainID= You may need to move the storage to maintenance and re-initialize it. On Thu, May 3, 2018 at 10:10 PM, Marcelo Leandro wrote: > Hello, > > Thank you for reply: > > oVirt Version - 4.1.9 > Vdsm Versoin - 4.20.23 > > attached logs, > > Very Thanks. > > Marcelo Leandro > > 2018-05-03 15:59 GMT-03:00 Ala Hino : > >> Can you please share more info? >> - The version you are using >> - Full log of vdsm and the engine >> >> Is the VM running or down while creating the snapshot? >> >> On Thu, May 3, 2018 at 8:32 PM, Marcelo Leandro >> wrote: >> >>> Anyone help me? >>> >>> 2018-05-02 17:55 GMT-03:00 Marcelo Leandro : >>> >>>> Hello , >>>> >>>> I am geting error when try do a snapshot: >>>> >>>> Error msg in SPM log. 
>>>> >>>> 2018-05-02 17:46:11,235-0300 WARN (tasks/2) [storage.ResourceManager] >>>> Resource factory failed to create resource '01_img_6e5cce71-3438-4045-9d5 >>>> 4-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6'. Canceling >>>> request. (resourceManager:543) >>>> Traceback (most recent call last): >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >>>> line 539, in registerResource >>>> obj = namespaceObj.factory.createResource(name, lockType) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", >>>> line 193, in createResource >>>> lockType) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", >>>> line 122, in __getResourceCandidatesList >>>> imgUUID=resourceName) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line >>>> 213, in getChain >>>> if srcVol.isLeaf(): >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >>>> 1430, in isLeaf >>>> return self._manifest.isLeaf() >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >>>> 138, in isLeaf >>>> return self.getVolType() == sc.type2name(sc.LEAF_VOL) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >>>> 134, in getVolType >>>> self.voltype = self.getMetaParam(sc.VOLTYPE) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line >>>> 118, in getMetaParam >>>> meta = self.getMetadata() >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", >>>> line 112, in getMetadata >>>> md = VolumeMetadata.from_lines(lines) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", >>>> line 103, in from_lines >>>> "Missing metadata key: %s: found: %s" % (e, md)) >>>> MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing >>>> metadata key: 'DOMAIN': found: {'NONE': '############################# >>>> ############################################################ >>>> ############################################################ >>>> ############################################################ >>>> ############################################################ >>>> ############################################################ >>>> ############################################################ >>>> ############################################################ >>>> #####################################################'}",) >>>> 2018-05-02 17:46:11,286-0300 WARN (tasks/2) >>>> [storage.ResourceManager.Request] (ResName='01_img_6e5cce71-3438 >>>> -4045-9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6', >>>> ReqID='a3cd9388-977b-45b9-9aa0-e431aeff8750') Tried to cancel a >>>> processed request (resourceManager:187) >>>> 2018-05-02 17:46:11,286-0300 ERROR (tasks/2) [storage.TaskManager.Task] >>>> (Task='ba0766ca-08a1-4d65-a4e9-1e0171939037') Unexpected error >>>> (task:875) >>>> Traceback (most recent call last): >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >>>> 882, in _run >>>> return fn(*args, **kargs) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >>>> 336, in run >>>> return self.cmd(*self.argslist, **self.argsdict) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >>>> line 79, in wrapper >>>> return method(self, *args, **kwargs) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line >>>> 1938, in createVolume >>>> with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE): >>>> File 
"/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >>>> line 1025, in acquireResource >>>> return _manager.acquireResource(namespace, name, lockType, >>>> timeout=timeout) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >>>> line 475, in acquireResource >>>> raise se.ResourceAcqusitionFailed() >>>> ResourceAcqusitionFailed: Could not acquire resource. Probably resource >>>> factory threw an exception.: () >>>> >>>> >>>> Anyone help? >>>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alkaplan at redhat.com Sun May 6 14:31:05 2018 From: alkaplan at redhat.com (Alona Kaplan) Date: Sun, 6 May 2018 17:31:05 +0300 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: There was a bug when adding a host to a cluster that contains a required network. It was fixed in 4.2.3.4. Bug-Url- https://bugzilla.redhat.com/1570388 Thanks, Alona. On Sun, May 6, 2018 at 3:49 PM, Yanir Quinn wrote: > For removing the non operational host : > 1.Right click on the host name > 2.Click on "Confirm host has been rebooted" > 3.Remove the host > > > For the issue you are experiencing with host addition, according to the > engine logs you have sent, you might need to perform a few steps , see: > https://bugzilla.redhat.com/show_bug.cgi?id=1516256#c2 > > I would also recommend to check the the host's network is not down. > Also, during installation of the host,observe the messages in the Events > section (UI) > > Hope this helps. > > > > > On Thu, May 3, 2018 at 10:07 PM, Justin Zygmont > wrote: > >> I can?t seem to do anything to control the host from the engine, when I >> select it for Maint, the engine log shows: >> >> >> >> [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default >> task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] Running command: >> MaintenanceNumberOfVdssCommand internal: false. Entities affected : ID: >> 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDSAction group >> MANIPULATE_HOST with role type ADMIN >> >> 2018-05-03 12:00:37,918-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] START, >> SetVdsStatusVDSCommand(HostName = ovnode102, >> SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', >> status='PreparingForMaintenance', nonOperationalReason='NONE', >> stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 647d5f78 >> >> >> >> >> >> I have only 1 host in the DC, status is Up, the cluster says host count >> is 2 even though the second host stays Non Operational. I don?t know how >> to remove it. >> >> I just installed and tried to join the DC, this is a fresh installation, >> the engine was launched through cockpit. >> >> >> >> Heres what nodectl shows from the host: >> >> >> >> ovnode102 ~]# nodectl check >> >> Status: OK >> >> Bootloader ... OK >> >> Layer boot entries ... OK >> >> Valid boot entries ... OK >> >> Mount points ... OK >> >> Separate /var ... OK >> >> Discard is used ... OK >> >> Basic storage ... OK >> >> Initialized VG ... OK >> >> Initialized Thin Pool ... OK >> >> Initialized LVs ... OK >> >> Thin storage ... OK >> >> Checking available space in thinpool ... OK >> >> Checking thinpool auto-extend ... OK >> >> vdsmd ... 
OK >> >> >> >> >> >> >> >> Thanks, >> >> >> >> >> >> *From:* Yanir Quinn [mailto:yquinn at redhat.com] >> *Sent:* Thursday, May 3, 2018 1:19 AM >> >> *To:* Justin Zygmont >> *Cc:* users at ovirt.org >> *Subject:* Re: [ovirt-users] adding a host >> >> >> >> Did you try switching the host to maintenance mode first ? >> >> What is the state of the data center and how many active hosts do you >> have now? >> >> And did you perform any updates recently or just run a fresh installation >> ? if so , did you run engine-setup before launching engine ? >> >> >> >> On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont >> wrote: >> >> I read this page and it doesn?t help since this is a host that can?t be >> removed, the ?remove? button is dimmed out. >> >> >> >> This is 4.22 ovirt node, but the host stays in a ?non operational? >> state. I notice the logs have a lot of errors, for example: >> >> >> >> >> >> the SERVER log: >> >> >> >> 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] >> (ForkJoinPool-1-worker-14) IJ000609: Attempt to return connection twice: >> org.jboss.jca.core.connectionmanager.listener.TxConnectionLi >> stener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapt >> ers.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 >> lastReturned=1525297223847 lastValidated=1525290267811 >> lastCheckedOut=1525296923770 trackByTx=false >> pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 >> mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] >> xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 >> connectionManager=5bec70d2 warned=false currentXid=null >> productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] >> txSync=null]: java.lang.Throwable: STACKTRACE >> >> at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcu >> rrentLinkedDequeManagedConnectionPool.returnConnection(Semap >> horeConcurrentLinkedDequeManagedConnectionPool.java:722) >> >> at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcu >> rrentLinkedDequeManagedConnectionPool.returnConnection(Semap >> horeConcurrentLinkedDequeManagedConnectionPool.java:611) >> >> at org.jboss.jca.core.connectionmanager.pool.AbstractPool. 
>> returnConnection(AbstractPool.java:847) >> >> at org.jboss.jca.core.connectionmanager.AbstractConnectionManag >> er.returnManagedConnection(AbstractConnectionManager.java:725) >> >> at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerI >> mpl.managedConnectionDisconnected(TxConnectionManagerImpl.java:585) >> >> at org.jboss.jca.core.connectionmanager.AbstractConnectionManag >> er.disconnectManagedConnection(AbstractConnectionManager.java:988) >> >> at org.jboss.jca.core.connectionmanager.AbstractConnectionManag >> er.reconnectManagedConnection(AbstractConnectionManager.java:974) >> >> at org.jboss.jca.core.connectionmanager.AbstractConnectionManag >> er.allocateConnection(AbstractConnectionManager.java:792) >> >> at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection( >> WrapperDataSource.java:138) >> >> at org.jboss.as.connector.subsystems.datasources.WildFlyDataSou >> rce.getConnection(WildFlyDataSource.java:64) >> >> at org.springframework.jdbc.datasource.DataSourceUtils.doGetCon >> nection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$P >> ostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) >> [dal.jar:] >> >> at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$P >> ostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) >> [dal.jar:] >> >> at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.ex >> ecuteImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] >> >> at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.ex >> ecuteReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] >> >> at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) >> [dal.jar:] >> >> at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopolog >> yPersisterImpl.isVmRunningOnHost(HostNetworkTopologyPersisterImpl.java:210) >> [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopolog >> yPersisterImpl.logChangedDisplayNetwork(HostNetworkTopologyPersisterImpl.java:179) >> [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopolog >> yPersisterImpl.auditNetworkCompliance(HostNetworkTopologyPersisterImpl.java:148) >> [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopolog >> yPersisterImpl.lambda$persistAndEnforceNetworkCompliance$0(H >> ostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] >> >> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e >> xecuteInNewTransaction(TransactionSupport.java:202) [utils.jar:] >> >> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e >> xecuteInRequired(TransactionSupport.java:137) [utils.jar:] >> >> at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.e >> xecuteInScope(TransactionSupport.java:105) [utils.jar:] >> >> at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopolog >> yPersisterImpl.persistAndEnforceNetworkCompliance(HostNetwor >> kTopologyPersisterImpl.java:93) [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopolog >> yPersisterImpl.persistAndEnforceNetworkCompliance(HostNetwor >> kTopologyPersisterImpl.java:154) [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.VdsManager.processRefreshCap >> abilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.VdsManager.handleRefreshCapa >> bilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.VdsManager.refreshHostSync(VdsManager.java:567) >> [vdsbroker.jar:] >> >> at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand. >> executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] >> >> at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1133) >> [bll.jar:] >> >> at org.ovirt.engine.core.bll.CommandBase.executeActionInTransac >> tionScope(CommandBase.java:1285) [bll.jar:] >> >> at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) >> [bll.jar:] >> >> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e >> xecuteInSuppressed(TransactionSupport.java:164) [utils.jar:] >> >> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e >> xecuteInScope(TransactionSupport.java:103) [utils.jar:] >> >> at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) >> [bll.jar:] >> >> at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) >> [bll.jar:] >> >> at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecu >> tor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:] >> >> at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) >> [bll.jar:] >> >> at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) >> [bll.jar:] >> >> at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) >> [bll.jar:] >> >> at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) >> [:1.8.0_161] >> >> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >> [rt.jar:1.8.0_161] >> >> at java.lang.reflect.Method.invoke(Method.java:498) >> [rt.jar:1.8.0_161] >> >> at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor. 
>> processInvocation(ManagedReferenceMethodInterceptor.java:52) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.invocation.InterceptorContext$Invocation.proceed( >> InterceptorContext.java:509) >> >> at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.del >> egateInterception(Jsr299BindingsInterceptor.java:78) >> >> at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doM >> ethodInterception(Jsr299BindingsInterceptor.java:88) >> >> at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.pro >> cessInvocation(Jsr299BindingsInterceptor.java:101) >> >> at org.jboss.as.ee.component.interceptors.UserInterceptorFactor >> y$1.processInvocation(UserInterceptorFactory.java:63) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeI >> nterceptor.processInvocation(ExecutionTimeInterceptor.java:43) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.proc >> essInvocation(ConcurrentContextInterceptor.java:45) >> [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.invocation.InitialInterceptor.processInvocation(In >> itialInterceptor.java:40) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.invocation.ChainedInterceptor.processInvocation(Ch >> ainedInterceptor.java:53) >> >> at org.jboss.as.ee.component.interceptors.ComponentDispatcherIn >> terceptor.processInvocation(ComponentDispatcherInterceptor.java:52) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.component.singleton.SingletonComponentInst >> anceAssociationInterceptor.processInvocation(SingletonCompon >> entInstanceAssociationInterceptor.java:53) [wildfly-ejb3-11.0.0.Final.jar >> :11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.invocation.InterceptorContext$Invocation.proceed( >> InterceptorContext.java:509) >> >> at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationIntercep >> tor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) >> [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] >> >> at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor. 
>> processInvocation(EjbRequestScopeActivationInterceptor.java:89) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.component.interceptors.CurrentInvocationCo >> ntextInterceptor.processInvocation(CurrentInvoc >> ationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final.jar >> :11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterc >> eptor.processInvocation(WaitTimeInterceptor.java:47) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.security.SecurityContextInterceptor.proces >> sInvocation(SecurityContextInterceptor.java:100) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.deployment.processors.StartupAwaitIntercep >> tor.processInvocation(StartupAwaitInterceptor.java:22) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptor >> Factory$1.processInvocation(ShutDownInterceptorFactory.java:64) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor. >> processInvocation(LoggingInterceptor.java:67) >> [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.as.ee.component.NamespaceContextInterceptor.proces >> sInvocation(NamespaceContextInterceptor.java:50) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.invocation.ContextClassLoaderInterceptor.processIn >> vocation(ContextClassLoaderInterceptor.java:60) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.invocation.InterceptorContext.run(InterceptorConte >> xt.java:438) >> >> at org.wildfly.security.manager.WildFlySecurityManager.doChecke >> d(WildFlySecurityManager.java:609) >> >> at org.jboss.invocation.AccessCheckingInterceptor.processInvoca >> tion(AccessCheckingInterceptor.java:57) >> >> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC >> ontext.java:422) >> >> at org.jboss.invocation.ChainedInterceptor.processInvocation(Ch >> ainedInterceptor.java:53) >> >> at org.jboss.as.ee.component.ViewService$View.invoke(ViewServic >> e.java:198) >> >> at org.jboss.as.ee.component.ViewDescription$1.processInvocatio >> n(ViewDescription.java:185) >> >> at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(Prox >> yInvocationHandler.java:81) >> >> at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view4.runInternalAction(Unknown >> Source) [bll.jar:] >> >> at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) >> [:1.8.0_161] >> >> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >> [rt.jar:1.8.0_161] >> >> at java.lang.reflect.Method.invoke(Method.java:498) >> [rt.jar:1.8.0_161] >> >> at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:433) >> [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] >> >> at 
org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler. >> invoke(EnterpriseBeanProxyMethodHandler.java:127) >> [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] >> >> at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invok >> e(EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final.ja >> r:2.4.3.Final] >> >> at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnterpris >> eTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:67) >> [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] >> >> at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) >> [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] >> >> at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$Backe >> ndInternal$BackendLocal$2049259618$Proxy$_$$_Weld$Enterprise >> Proxy$.runInternalAction(Unknown Source) [bll.jar:] >> >> at org.ovirt.engine.core.bll.VdsEventListener.refreshHostCapabi >> lities(VdsEventListener.java:598) [bll.jar:] >> >> at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$Subs >> criberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:47) >> [vdsbroker.jar:] >> >> at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$Subs >> criberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:30) >> [vdsbroker.jar:] >> >> at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCal >> lable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar:] >> >> at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCal >> lable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] >> >> at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) >> [rt.jar:1.8.0_161] >> >> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) >> [rt.jar:1.8.0_161] >> >> at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) >> [rt.jar:1.8.0_161] >> >> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) >> [rt.jar:1.8.0_161] >> >> at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) >> [rt.jar:1.8.0_161] >> >> >> >> 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] >> (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted >> atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa >> >> >> >> >> >> >> >> >> >> >> >> And the ENGINE log: >> >> >> >> 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] >> (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back >> >> 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] >> (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: >> IllegalStateException: Transaction Local transaction >> (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa >> status: ActionStatus.ABORTED >, owner=Local transaction context for >> provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK >> >> 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll.Han >> dleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) >> [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand >> internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 >> Type: VDS >> >> 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] >> (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: >> HandleVdsVersionCommand internal: true. 
Entities affected : ID: >> 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS >> >> 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] >> (ForkJoinPool-1-worker-14) [2a0ec90b] Refresh host capabilities finished. >> Lock released. Monitoring can run now for host 'ovnode102 from data-center >> 'Default' >> >> 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] >> (ForkJoinPool-1-worker-14) [2a0ec90b] Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' >> failed: Could not get JDBC Connection; nested exception is >> java.sql.SQLException: javax.resource.ResourceException: IJ000457: >> Unchecked throwable in managedConnectionReconnected() >> cl=org.jboss.jca.core.connectionmanager.listener.TxConnectio >> nListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapt >> ers.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 >> lastReturned=1525297223847 lastValidated=1525290267811 >> lastCheckedOut=1525296923770 trackByTx=false >> pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 >> mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] >> xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 >> connectionManager=5bec70d2 warned=false currentXid=null >> productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] >> txSync=null] >> >> 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] >> (ForkJoinPool-1-worker-14) [2a0ec90b] Exception: >> org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get >> JDBC Connection; nested exception is java.sql.SQLException: >> javax.resource.ResourceException: IJ000457: Unchecked throwable in >> managedConnectionReconnected() cl=org.jboss.jca.core.connecti >> onmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed >> connection=org.jboss.jca.adapters.jdbc.local.LocalManagedCon >> nection at 3f37cf10 connection handles=0 lastReturned=1525297223847 >> lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false >> pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 >> mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] >> xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 >> connectionManager=5bec70d2 warned=false currentXid=null >> productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] >> txSync=null] >> >> at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) >> [spring-jdbc.jar:4.3.9.RELEASE] >> >> at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$P >> ostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) >> [dal.jar:] >> >> . >> >> . >> >> . >> >> . 
>> >> 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) >> [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to >> refresh the capabilities of host ovnode102. >> >> 2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] >> (ForkJoinPool-1-worker-14) [2a0ec90b] Lock freed to object >> 'EngineLock:{exclusiveLocks='[74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, >> HOST_NETWORK74dfe965-cb11-495a-96a0-3dae6b3cbd75=HOST_NETWORK]', >> sharedLocks=''}' >> >> 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbrok >> er.vdsbroker.GetHardwareInfoAsyncVDSCommand] >> (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, >> GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, >> VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', >> vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: >> 300f7345 >> >> 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbrok >> er.vdsbroker.GetHardwareInfoAsyncVDSCommand] >> (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, >> GetHardwareInfoAsyncVDSCommand, log id: 300f7345 >> >> 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] >> (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running >> command: SetNonOperationalVdsCommand internal: true. Entities affected : >> ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS >> >> 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, >> SetVdsStatusVDSCommand(HostName = ovnode102., >> SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', >> status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', >> stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 >> >> 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll.pro >> vider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) >> [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ >> f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' >> >> 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll.pro >> vider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) >> [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. >> >> 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] >> (default task-40) [] User admin at internal successfully logged in with >> scopes: ovirt-app-api ovirt-ext=token-info:authz-search >> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate >> ovirt-ext=token:password-access >> >> 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll.pro >> vider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) >> [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[ >> f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' >> >> 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.uti >> ls.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) >> [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are >> waiting in the queue. 
>> >> 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.uti >> ls.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) >> [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting >> for tasks and 0 tasks in queue. >> >> 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.uti >> ls.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) >> [] Thread pool 'engineScheduled' is using 1 threads out of 100 and 99 tasks >> are waiting in the queue. >> >> 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.uti >> ls.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) >> [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 >> tasks are waiting in the queue. >> >> 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.uti >> ls.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) >> [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks >> are waiting in the queue. >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> *From:* Yanir Quinn [mailto:yquinn at redhat.com] >> *Sent:* Wednesday, May 2, 2018 12:34 AM >> *To:* Justin Zygmont >> *Cc:* users at ovirt.org >> *Subject:* Re: [ovirt-users] adding a host >> >> >> >> Hi, >> >> What document are you using ? >> >> See if you find the needed information here : >> https://ovirt.org/documentation/admin-guide/chap-Hosts/ >> >> >> >> >> For engine related potential errors i recommend also checking the >> engine.log and in UI check the events section. >> >> Regards, >> >> Yanir Quinn >> >> >> >> On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont >> wrote: >> >> I have tried to add a host to the engine and it just takes forever never >> working or giving any error message. When I look in the engine?s >> server.log I see it says the networks are missing. >> >> I thought when you install a node and add it to the engine it will add >> the networks automatically? The docs don?t give much information about >> this, and I can?t even remove the host through the UI. What steps are >> required to prepare a node when several vlans are involved? >> >> >> >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> >> >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehaas at redhat.com Sun May 6 14:34:19 2018 From: ehaas at redhat.com (Edward Haas) Date: Sun, 6 May 2018 17:34:19 +0300 Subject: [ovirt-users] routing In-Reply-To: References: Message-ID: Not sure if I understand what you are asking here, but the need for a gateway per network has emerged from the need to support other host networks (not VM networks) beside the management one. As an example, migration and storage networks can be defined, each passing dedicated traffic (one for storage communication and another for VM migration traffic), they may need to pass through different gateways. So the management network can be accessed using gateway A, storage using B and migration using C. A will usually be set on a host level as the host default gateway, and the others will be set for the individual networks. 
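As an illustration only of what a gateway per network comes down to on a host, the sketch below uses plain iproute2 source-based routing; the interface names, addresses, gateways and table number are made-up examples rather than values from this thread:

    # management network keeps the host default gateway (gateway A)
    ip route add default via 192.0.2.1 dev ovirtmgmt

    # the storage network gets its own routing table with gateway B;
    # traffic sourced from the storage address is looked up in that table
    ip route add 198.51.100.0/24 dev storage src 198.51.100.10 table 100
    ip route add default via 198.51.100.1 dev storage table 100
    ip rule add from 198.51.100.10/32 table 100

VDSM sets up rules to this effect itself when a gateway is defined on a logical network, so the commands are only meant to show the resulting behaviour, not a manual setup to copy.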
Otherwise, how would you expect storage to use a different router (than the management one) in the network? Thanks, Edy. On Thu, May 3, 2018 at 1:08 AM, Justin Zygmont wrote: > I don?t understand why you would want this unless the ovirtnode itself was > actually the router, wouldn?t you want to only have an IP on the management > network, and leave the rest of the VLANS blank so they depend on the router > to route the traffic: > > > > NIC1 -> ovirt-mgmt - gateway set > > NIC2 -> VLAN3, VLAN4, etc? > > > > > > https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/ > > > > *Viewing or Editing the Gateway for a Logical Network* > > Users can define the gateway, along with the IP address and subnet mask, > for a logical network. This is necessary when multiple networks exist on a > host and traffic should be routed through the specified network, rather > than the default gateway. > > If multiple networks exist on a host and the gateways are not defined, > return traffic will be routed through the default gateway, which may not > reach the intended destination. This would result in users being unable to > ping the host. > > oVirt handles multiple gateways automatically whenever an interface goes > up or down. > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frolland at redhat.com Sun May 6 14:38:14 2018 From: frolland at redhat.com (Fred Rolland) Date: Sun, 6 May 2018 17:38:14 +0300 Subject: [ovirt-users] Re-attaching ISOs and moving ISOs storage In-Reply-To: References: <288C5B8E-D9BC-48E5-A130-EA2F9FA9E942@well.ox.ac.uk> Message-ID: I am trying to reproduce this on my setup, with no success for now. Can you share the engine log corresponding to the time you tried to detach? What are the options available currently? I understand that "Activate" does not work, what about "Detach"? Can your hosts access the ISO domain? On Fri, May 4, 2018 at 2:09 PM, Callum Smith wrote: > Is there any sensible way to either clean up the existing ISOs storage or > re-attach it? I'm struggling to even export VMs and migrate them elsewhere > with this and need to recover them asap. > > Regards, > Callum > > -- > > Callum Smith > Research Computing Core > Wellcome Trust Centre for Human Genetics > University of Oxford > e. callum at well.ox.ac.uk > > On 2 May 2018, at 15:09, Callum Smith wrote: > > Attached, thank you for looking into this > > > https://*HOSTNAME*/ovirt-engine/api/v4/storagedomains/f5914df0-f46c-4cc0-b666-c929aa0225ae > > VMISOs 11770357874688 false 0 5 false ok false
backoffice01.cluster /vm-iso nfs v1 false false iso 38654705664 10 false

> > https://*HOSTNAME*/ovirt-engine/api/v4/datacenters/5a54bf81-0228-02bc-0358-000000000304/storagedomains > >

tegile-virtman-backup 17519171600384 false 0 5 false ok false maintenance 192.168.64.248 auto /export/virtman/backup nfs v1 false false export 8589934592 10 false
VMStorage 11770357874688 false 118111600640 5 false ok true active backoffice01.cluster /vm-storage2 nfs v4 false false data 38654705664 10 false
tegile-virtman 2190433320960 false 226559524864 5 false ok false active 192.168.64.248 auto /export/virtman/VirtualServerShare_1 nfs v4 false false data 8589934592 10 false
VMISOs 11770357874688 false 0 5 false ok false maintenance backoffice01.cluster /vm-iso nfs v1 false false iso 38654705664 10 false
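For anyone repeating this query later, the same two documents can be pulled as plain XML with curl; the admin password below is a placeholder, and -k (which skips certificate checks) is only acceptable for a quick look:

    curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
        'https://*HOSTNAME*/ovirt-engine/api/v4/storagedomains/f5914df0-f46c-4cc0-b666-c929aa0225ae'
    curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
        'https://*HOSTNAME*/ovirt-engine/api/v4/datacenters/5a54bf81-0228-02bc-0358-000000000304/storagedomains'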
> > > > Regards, > Callum > > -- > > Callum Smith > Research Computing Core > Wellcome Trust Centre for Human Genetics > University of Oxford > e. callum at well.ox.ac.uk > > On 2 May 2018, at 14:46, Fred Rolland wrote: > > Can you share the REST API data of the Storage domain and Data Center? > Here an example of the URLs, you will need to replace with correct ids. > > http://MY-SERVER/ovirt-engine/api/v4/storagedomains/ > 13461356-f6f7-4a58-9897-2fac61ff40af > > > http://MY-SERVER/ovirt-engine/api/v4/datacenters/5a5df553- > 022d-036d-01e8-000000000071/storagedomains > > > > > On Wed, May 2, 2018 at 12:53 PM, Callum Smith > wrote: > >> This is on 4.2.0.2-1, I've linked the main logs to dropbox simply because >> they're big, full of noise right now. >> https://www.dropbox.com/s/f8q3m5amro2a1b2/engine.log?dl=0 >> https://www.dropbox.com/s/uods85jk65halo3/vdsm.log?dl=0 >> >> Regards, >> Callum >> >> -- >> >> Callum Smith >> Research Computing Core >> Wellcome Trust Centre for Human Genetics >> University of Oxford >> e. callum at well.ox.ac.uk >> >> On 2 May 2018, at 10:43, Fred Rolland wrote: >> >> Which version are you using? >> Can you provide the whole log? >> >> For some reason, it looks like the Vdsm thinks that the Storage Domain is >> not part of the pool. >> >> On Wed, May 2, 2018 at 11:20 AM, Callum Smith >> wrote: >> >>> State is maintenance for the ISOs storage. I've extracted what is >>> hopefully the relevant bits of the log. >>> >>> VDSM.log (SPM) >>> >>> 2018-05-02 09:16:03,455+0100 INFO (ioprocess communication (179084)) >>> [IOProcess] Starting ioprocess (__init__:447) >>> 2018-05-02 09:16:03,456+0100 INFO (ioprocess communication (179091)) >>> [IOProcess] Starting ioprocess (__init__:447) >>> 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) [vdsm.api] FINISH >>> activateStorageDomain error=Storage domain not in pool: >>> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >>> pool=5a54bf81-0228-02bc-0358-000000000304' >>> from=::ffff:192.168.64.254,58968, flow_id=93433989-8e26-48a9-bd3a-2ab95f296c08, >>> task_id=7f21f911-348f-45a3-b79c-e3cb11642035 (api:50) >>> 2018-05-02 09:16:03,461+0100 ERROR (jsonrpc/0) >>> [storage.TaskManager.Task] (Task='7f21f911-348f-45a3-b79c-e3cb11642035') >>> Unexpected error (task:875) >>> Traceback (most recent call last): >>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >>> 882, in _run >>> return fn(*args, **kargs) >>> File "", line 2, in activateStorageDomain >>> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, >>> in method >>> ret = func(*args, **kwargs) >>> File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line >>> 1256, in activateStorageDomain >>> pool.activateSD(sdUUID) >>> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >>> line 79, in wrapper >>> return method(self, *args, **kwargs) >>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line >>> 1130, in activateSD >>> self.validateAttachedDomain(dom) >>> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >>> line 79, in wrapper >>> return method(self, *args, **kwargs) >>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 557, >>> in validateAttachedDomain >>> raise se.StorageDomainNotInPool(self.spUUID, dom.sdUUID) >>> StorageDomainNotInPool: Storage domain not in pool: >>> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >>> pool=5a54bf81-0228-02bc-0358-000000000304' >>> 2018-05-02 09:16:03,461+0100 INFO (jsonrpc/0) >>> [storage.TaskManager.Task] 
(Task='7f21f911-348f-45a3-b79c-e3cb11642035') >>> aborting: Task is aborted: "Storage domain not in pool: >>> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >>> pool=5a54bf81-0228-02bc-0358-000000000304'" - code 353 (task:1181) >>> 2018-05-02 09:16:03,462+0100 ERROR (jsonrpc/0) [storage.Dispatcher] >>> FINISH activateStorageDomain error=Storage domain not in pool: >>> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >>> pool=5a54bf81-0228-02bc-0358-000000000304' (dispatcher:82) >>> >>> engine.log >>> >>> 2018-05-02 09:16:02,326+01 INFO [org.ovirt.engine.core.bll.st >>> orage.domain.ActivateStorageDomainCommand] (default task-20) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock Acquired to object >>> 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', >>> sharedLocks=''}' >>> 2018-05-02 09:16:02,376+01 INFO [org.ovirt.engine.core.bll.st >>> orage.domain.ActivateStorageDomainCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] Running command: >>> ActivateStorageDomainCommand internal: false. Entities affected : ID: >>> f5914df0-f46c-4cc0-b666-c929aa0225ae Type: StorageAction group >>> MANIPULATE_STORAGE_DOMA >>> IN with role type ADMIN >>> 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.st >>> orage.domain.ActivateStorageDomainCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] Lock freed to object >>> 'EngineLock:{exclusiveLocks='[f5914df0-f46c-4cc0-b666-c929aa0225ae=STORAGE]', >>> sharedLocks=''}' >>> 2018-05-02 09:16:02,385+01 INFO [org.ovirt.engine.core.bll.st >>> orage.domain.ActivateStorageDomainCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] ActivateStorage Domain. Before >>> Connect all hosts to pool. Time: Wed May 02 09:16:02 BST 2018 >>> 2018-05-02 09:16:02,407+01 INFO [org.ovirt.engine.core.bll.st >>> orage.connection.ConnectStorageToVdsCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] Running >>> command: ConnectStorageToVdsCommand internal: true. 
Entities affected : >>> ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group >>> CREATE_STORAGE_DOMAIN with role type ADMIN >>> 2018-05-02 09:16:02,421+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.ConnectStorageServerVDSCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] START, >>> ConnectStorageServerVDSCommand(HostName = virtA003, >>> StorageServerConnectionManagementVDSParameters:{hostId='fe28 >>> 61fc-2b47-4807-b054-470198eda473', storagePoolId='00000000-0000-0 >>> 000-0000-000000 >>> 000000', storageType='NFS', connectionList='[StorageServer >>> Connections:{id='da392861-aedc-4f1e-97f4-6919fb01f1e9', >>> connection='backoffice01.cluster:/vm-iso', iqn='null', vfsType='null', >>> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', >>> iface='null', netIfaceName='null'}]'}), log id: 23ce648f >>> 2018-05-02 09:16:02,446+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.ConnectStorageServerVDSCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33456) [40a82b47] FINISH, >>> ConnectStorageServerVDSCommand, return: {da392861-aedc-4f1e-97f4-6919fb01f1e9=0}, >>> log id: 23ce648f >>> 2018-05-02 09:16:02,450+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.irsbroker.ActivateStorageDomainVDSCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] START, >>> ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSComman >>> dParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', >>> ignoreFailoverLimit='false', stor >>> ageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}), log id: 5c864594 >>> 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: >>> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command >>> ActivateStorageDomainVDS failed: Storage domain not in pool: >>> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, pool=5a5 >>> 4bf81-0228-02bc-0358-000000000304' >>> 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.vdsbrok >>> er.irsbroker.ActivateStorageDomainVDSCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] Command >>> 'ActivateStorageDomainVDSCommand( ActivateStorageDomainVDSComman >>> dParameters:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', >>> ignoreFailoverLimit='false', st >>> orageDomainId='f5914df0-f46c-4cc0-b666-c929aa0225ae'})' execution >>> failed: IRSGenericException: IRSErrorException: Storage domain not in pool: >>> u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >>> pool=5a54bf81-0228-02bc-0358-000000000304' >>> 2018-05-02 09:16:02,635+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.irsbroker.ActivateStorageDomainVDSCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] FINISH, >>> ActivateStorageDomainVDSCommand, log id: 5c864594 >>> 2018-05-02 09:16:02,635+01 ERROR [org.ovirt.engine.core.bll.sto >>> rage.domain.ActivateStorageDomainCommand] (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] Command >>> 'org.ovirt.engine.core.bll.storage.domain.ActivateStorageDomainCommand' >>> failed: EngineException: org.ovirt.engine.core.vdsbroke >>> r.irsbroker.IrsOperationFailedNoFailove >>> rException: IRSGenericException: IRSErrorException: Storage domain not >>> in pool: u'domain=f5914df0-f46c-4cc0-b666-c929aa0225ae, >>> pool=5a54bf81-0228-02bc-0358-000000000304' 
(Failed with error >>> StorageDomainNotInPool and code 353) >>> 2018-05-02 09:16:02,636+01 INFO [org.ovirt.engine.core.bll.st >>> orage.domain.ActivateStorageDomainCommand] >>> (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] Command >>> [id=22b0f3c1-9a09-4e26-8096-d83465c8f4ee]: Compensating >>> CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.b >>> usinessentities.StoragePoolIsoMap; snapshot: EntityStatus >>> Snapshot:{id='StoragePoolIsoMapId:{storagePoolId='5a54bf81-0228-02bc-0358-000000000304', >>> storageId='f5914df0-f46c-4cc0-b666-c929aa0225ae'}', >>> status='Maintenance'}. >>> 2018-05-02 09:16:02,660+01 ERROR [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33455) >>> [93433989-8e26-48a9-bd3a-2ab95f296c08] EVENT_ID: >>> USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Failed to activate Storage >>> Domain VMISOs (Data Center Default) by admin at internal-authz >>> >>> Regards, >>> Callum >>> >>> -- >>> >>> Callum Smith >>> Research Computing Core >>> Wellcome Trust Centre for Human Genetics >>> University of Oxford >>> e. callum at well.ox.ac.uk >>> >>> On 2 May 2018, at 08:44, Fred Rolland wrote: >>> >>> Hi, >>> >>> Can you provide logs from engine and Vdsm(SPM)? >>> What is the state now? >>> >>> Thanks, >>> Fred >>> >>> On Tue, May 1, 2018 at 4:11 PM, Callum Smith >>> wrote: >>> >>>> Dear All, >>>> >>>> It appears that clicking "detach" on the ISO storage domain is a really >>>> bad idea. This has gotten half way through the procedure and now can't be >>>> recovered from. Is there any advice for re-attaching the ISO storage domain >>>> manually? An NFS mount didn't add it back to the "pool" unfortunately. >>>> >>>> On a separate note, is it possible to migrate this storage to a new >>>> location? And if so how. >>>> >>>> Regards, >>>> Callum >>>> >>>> -- >>>> >>>> Callum Smith >>>> Research Computing Core >>>> Wellcome Trust Centre for Human Genetics >>>> University of Oxford >>>> e. callum at well.ox.ac.uk >>>> >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >>> >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzygmont at proofpoint.com Mon May 7 04:29:24 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Mon, 7 May 2018 04:29:24 +0000 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: That doesn?t work, I even have the server powered down but the Remove button is always dimmed, I tried everything I can think of. Error while executing action: Cannot perform confirm 'Host has been rebooted'. Another power management action is already in progress. And I?m not even using power management. From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Sunday, May 6, 2018 5:50 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host For removing the non operational host : 1.Right click on the host name 2.Click on "Confirm host has been rebooted" 3.Remove the host For the issue you are experiencing with host addition, according to the engine logs you have sent, you might need to perform a few steps , see: https://bugzilla.redhat.com/show_bug.cgi?id=1516256#c2 I would also recommend to check the the host's network is not down. 
Also, during installation of the host,observe the messages in the Events section (UI) Hope this helps. On Thu, May 3, 2018 at 10:07 PM, Justin Zygmont > wrote: I can?t seem to do anything to control the host from the engine, when I select it for Maint, the engine log shows: [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] Running command: MaintenanceNumberOfVdssCommand internal: false. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDSAction group MANIPULATE_HOST with role type ADMIN 2018-05-03 12:00:37,918-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] START, SetVdsStatusVDSCommand(HostName = ovnode102, SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='PreparingForMaintenance', nonOperationalReason='NONE', stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 647d5f78 I have only 1 host in the DC, status is Up, the cluster says host count is 2 even though the second host stays Non Operational. I don?t know how to remove it. I just installed and tried to join the DC, this is a fresh installation, the engine was launched through cockpit. Heres what nodectl shows from the host: ovnode102 ~]# nodectl check Status: OK Bootloader ... OK Layer boot entries ... OK Valid boot entries ... OK Mount points ... OK Separate /var ... OK Discard is used ... OK Basic storage ... OK Initialized VG ... OK Initialized Thin Pool ... OK Initialized LVs ... OK Thin storage ... OK Checking available space in thinpool ... OK Checking thinpool auto-extend ... OK vdsmd ... OK Thanks, From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Thursday, May 3, 2018 1:19 AM To: Justin Zygmont > Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host Did you try switching the host to maintenance mode first ? What is the state of the data center and how many active hosts do you have now? And did you perform any updates recently or just run a fresh installation ? if so , did you run engine-setup before launching engine ? On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont > wrote: I read this page and it doesn?t help since this is a host that can?t be removed, the ?remove? button is dimmed out. This is 4.22 ovirt node, but the host stays in a ?non operational? state. 
I notice the logs have a lot of errors, for example: the SERVER log: 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (ForkJoinPool-1-worker-14) IJ000609: Attempt to return connection twice: org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null]: java.lang.Throwable: STACKTRACE at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:722) at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:611) at org.jboss.jca.core.connectionmanager.pool.AbstractPool.returnConnection(AbstractPool.java:847) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.returnManagedConnection(AbstractConnectionManager.java:725) at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.managedConnectionDisconnected(TxConnectionManagerImpl.java:585) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.disconnectManagedConnection(AbstractConnectionManager.java:988) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.reconnectManagedConnection(AbstractConnectionManager.java:974) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:792) at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:138) at org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:64) at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) [dal.jar:] at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] at 
org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) [dal.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.isVmRunningOnHost(HostNetworkTopologyPersisterImpl.java:210) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.logChangedDisplayNetwork(HostNetworkTopologyPersisterImpl.java:179) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.auditNetworkCompliance(HostNetworkTopologyPersisterImpl.java:148) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.lambda$persistAndEnforceNetworkCompliance$0(HostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:93) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:154) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.processRefreshCapabilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.handleRefreshCapabilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.refreshHostSync(VdsManager.java:567) [vdsbroker.jar:] at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand.executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1133) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1285) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) [bll.jar:] at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) [bll.jar:] at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:78) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:88) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:101) at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) 
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438) at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:609) at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198) at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185) at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81) at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view4.runInternalAction(Unknown Source) [bll.jar:] at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:433) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:127) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:67) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown Source) [bll.jar:] at org.ovirt.engine.core.bll.VdsEventListener.refreshHostCapabilities(VdsEventListener.java:598) [bll.jar:] at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:47) [vdsbroker.jar:] at 
org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:30) [vdsbroker.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) [rt.jar:1.8.0_161] 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa And the ENGINE log: 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: IllegalStateException: Transaction Local transaction (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa status: ActionStatus.ABORTED >, owner=Local transaction context for provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Refresh host capabilities finished. Lock released. 
Monitoring can run now for host 'ovnode102 from data-center 'Default' 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' failed: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Exception: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] . . . . 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to refresh the capabilities of host ovnode102. 
2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Lock freed to object 'EngineLock:{exclusiveLocks='[74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, HOST_NETWORK74dfe965-cb11-495a-96a0-3dae6b3cbd75=HOST_NETWORK]', sharedLocks=''}' 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: 300f7345 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, GetHardwareInfoAsyncVDSCommand, log id: 300f7345 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, SetVdsStatusVDSCommand(HostName = ovnode102., SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-40) [] User admin at internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 tasks in queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 1 threads out of 100 and 99 tasks are waiting in the queue. 
2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks are waiting in the queue.
From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Wednesday, May 2, 2018 12:34 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host
Hi, What document are you using? See if you find the needed information here: https://ovirt.org/documentation/admin-guide/chap-Hosts/ For engine-related potential errors I recommend also checking the engine.log and, in the UI, the Events section. Regards, Yanir Quinn
On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont wrote: I have tried to add a host to the engine and it just takes forever, never working or giving any error message. When I look in the engine's server.log I see it says the networks are missing. I thought when you install a node and add it to the engine it will add the networks automatically? The docs don't give much information about this, and I can't even remove the host through the UI. What steps are required to prepare a node when several vlans are involved?
_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From jzygmont at proofpoint.com Mon May 7 04:30:20 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Mon, 7 May 2018 04:30:20 +0000 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID:
I've deselected the required status; I will try to add it again if I am ever able to remove the old host.
From: Alona Kaplan [mailto:alkaplan at redhat.com] Sent: Sunday, May 6, 2018 7:31 AM To: Yanir Quinn Cc: Justin Zygmont ; users at ovirt.org Subject: Re: [ovirt-users] adding a host
There was a bug when adding a host to a cluster that contains a required network. It was fixed in 4.2.3.4. Bug-Url: https://bugzilla.redhat.com/1570388 Thanks, Alona.
On Sun, May 6, 2018 at 3:49 PM, Yanir Quinn wrote: For removing the non-operational host:
1. Right-click on the host name
2. Click on "Confirm host has been rebooted"
3. Remove the host
For the issue you are experiencing with host addition, according to the engine logs you have sent, you might need to perform a few steps, see: https://bugzilla.redhat.com/show_bug.cgi?id=1516256#c2 I would also recommend checking that the host's network is not down. Also, during installation of the host, observe the messages in the Events section (UI). Hope this helps.
On Thu, May 3, 2018 at 10:07 PM, Justin Zygmont wrote: I can't seem to do anything to control the host from the engine; when I select it for Maint, the engine log shows:
[org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] Running command: MaintenanceNumberOfVdssCommand internal: false.
Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDSAction group MANIPULATE_HOST with role type ADMIN 2018-05-03 12:00:37,918-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] START, SetVdsStatusVDSCommand(HostName = ovnode102, SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='PreparingForMaintenance', nonOperationalReason='NONE', stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 647d5f78
I have only 1 host in the DC, status is Up, and the cluster says the host count is 2 even though the second host stays Non Operational. I don't know how to remove it. I just installed and tried to join the DC; this is a fresh installation, and the engine was launched through cockpit.
Here's what nodectl shows from the host:
ovnode102 ~]# nodectl check
Status: OK
Bootloader ... OK
Layer boot entries ... OK
Valid boot entries ... OK
Mount points ... OK
Separate /var ... OK
Discard is used ... OK
Basic storage ... OK
Initialized VG ... OK
Initialized Thin Pool ... OK
Initialized LVs ... OK
Thin storage ... OK
Checking available space in thinpool ... OK
Checking thinpool auto-extend ... OK
vdsmd ... OK
Thanks,
From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Thursday, May 3, 2018 1:19 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host
Did you try switching the host to maintenance mode first? What is the state of the data center and how many active hosts do you have now? And did you perform any updates recently or just run a fresh installation? If so, did you run engine-setup before launching the engine?
On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont wrote: I read this page and it doesn't help, since this is a host that can't be removed; the "remove" button is dimmed out. This is 4.22 ovirt node, but the host stays in a "non operational" state.
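For a host stuck like this, the flow Yanir describes above (confirm the host has been rebooted, then remove it) can also be scripted against the engine instead of driven through the UI. The following is only a rough sketch with the oVirt Python SDK (ovirtsdk4): the engine URL, credentials and host name are placeholders, and the assumption that the UI's "Confirm host has been rebooted" corresponds to a manual fence action is mine, not something confirmed in this thread.

# Rough sketch only. Placeholder URL, credentials and host name; assumes
# ovirtsdk4 is installed and the engine is reachable. The mapping of UI
# buttons to SDK calls below is an assumption, not confirmed in this thread.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='changeme',                                # placeholder password
    insecure=True,                                      # or ca_file='ca.pem'
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=ovnode102')[0]   # assumes the host exists
host_service = hosts_service.host_service(host.id)

# Assumed equivalent of "Confirm host has been rebooted" in the UI:
host_service.fence(fence_type='manual')

# Equivalent of the Remove button once the host is no longer locked:
host_service.remove()

connection.close()

If the fence or remove call is rejected because the engine still holds locks on the host, restarting ovirt-engine first, as suggested later in the thread, may be needed.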
_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From jzygmont at proofpoint.com Mon May 7 04:50:16 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Mon, 7 May 2018 04:50:16 +0000 Subject: [ovirt-users] routing In-Reply-To: References: Message-ID:
Thanks for the reply. OK, I see what you're saying; it's just confusing because there are several places that mention the gateways and none of them are clear on what they're doing. For example, under Cluster > Networks > Manage Networks, default route is only selectable for 1 network, yet in each network you create there is still the option of choosing an IP address and gateway. Even if I don't put in any IP or gateway for a tagged vlan, it still depends on the management gateway to forward to the router. I thought I should be able to lose the management network and still have all the tagged vlans working?
From: Edward Haas [mailto:ehaas at redhat.com] Sent: Sunday, May 6, 2018 7:34 AM To: Justin Zygmont Cc: users at ovirt.org Subject: Re: [ovirt-users] routing
Not sure if I understand what you are asking here, but the need for a gateway per network has emerged from the need to support other host networks (not VM networks) beside the management one. As an example, migration and storage networks can be defined, each passing dedicated traffic (one for storage communication and another for VM migration traffic); they may need to pass through different gateways. So the management network can be accessed using gateway A, storage using B and migration using C. A will usually be set on a host level as the host default gateway, and the others will be set for the individual networks. Otherwise, how would you expect storage to use a different router (than the management one) in the network? Thanks, Edy.
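What Edy describes usually comes down to source-based policy routing on the host: each non-management network that has its own gateway gets its own routing table, chosen by the source address of the traffic, so replies leave through that network's router rather than through the host default gateway. The sketch below only prints the kind of ip rule / ip route commands involved, to make the idea concrete; the network names, subnets, gateways and table numbers are invented for the example and are not the exact rules oVirt/VDSM generates.

#!/usr/bin/env python3
# Illustration of source-based policy routing, the general Linux mechanism
# behind "a gateway per network". All values below are made up for the example.

NETWORKS = [
    # (network/bridge name, subnet, host address on it, gateway, table number)
    ("storage",   "10.10.10.0/24", "10.10.10.5", "10.10.10.1", 101),
    ("migration", "10.10.20.0/24", "10.10.20.5", "10.10.20.1", 102),
]

def commands_for(name, subnet, addr, gateway, table):
    # Traffic sourced from this network's address is looked up in its own
    # table, and that table's default route points at the network's gateway.
    return [
        f"ip route add {subnet} dev {name} table {table}",
        f"ip route add default via {gateway} dev {name} table {table}",
        f"ip rule add from {addr} table {table}",
    ]

for net in NETWORKS:
    for cmd in commands_for(*net):
        print(cmd)

The script only prints the commands; nothing here touches the system by itself, so the output can be reviewed or tried by hand on a test host.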
On Thu, May 3, 2018 at 1:08 AM, Justin Zygmont wrote: I don't understand why you would want this unless the oVirt node itself was actually the router; wouldn't you want to only have an IP on the management network, and leave the rest of the VLANs blank so they depend on the router to route the traffic: NIC1 -> ovirt-mgmt - gateway set NIC2 -> VLAN3, VLAN4, etc? https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/
Viewing or Editing the Gateway for a Logical Network: Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway. If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host. oVirt handles multiple gateways automatically whenever an interface goes up or down.
_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From alkaplan at redhat.com Mon May 7 06:24:33 2018 From: alkaplan at redhat.com (Alona Kaplan) Date: Mon, 7 May 2018 09:24:33 +0300 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID:
First restart the engine. Then try to change the required network to be non-required and click on RefreshCapabilities on the host. It should move the host to the active state without the need to remove it. If it doesn't help, please attach your full engine.log and server.log. Anyway, please update your oVirt to version 4.2.3.4; the bug was fixed there. Thanks, Alona.
On Mon, May 7, 2018 at 7:30 AM, Justin Zygmont wrote: > I've deselected the required status, I will try to add it again if I am ever > able to remove the old host. > > > > > > *From:* Alona Kaplan [mailto:alkaplan at redhat.com] > *Sent:* Sunday, May 6, 2018 7:31 AM > *To:* Yanir Quinn > *Cc:* Justin Zygmont ; users at ovirt.org > > *Subject:* Re: [ovirt-users] adding a host > > > > There was a bug when adding a host to a cluster that contains a required > network. It was fixed in 4.2.3.4. > > Bug-Url- https://bugzilla.redhat.com/1570388 > > > > > Thanks, > > Alona. > > > > On Sun, May 6, 2018 at 3:49 PM, Yanir Quinn wrote: > > For removing the non operational host : > > 1.Right click on the host name > > 2.Click on "Confirm host has been rebooted" > > 3.Remove the host > > > For the issue you are experiencing with host addition, according to the > engine logs you have sent, you might need to perform a few steps , see: > https://bugzilla.redhat.com/show_bug.cgi?id=1516256#c2 > > > I would also recommend to check the the host's network is not down. > > Also, during installation of the host,observe the messages in the Events > section (UI) > > Hope this helps. > > > > > > > > > > On Thu, May 3, 2018 at 10:07 PM, Justin Zygmont > wrote: > > I can't seem to do anything to control the host from the engine, when I > select it for Maint, the engine log shows: > > > > [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default > task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] Running command: > MaintenanceNumberOfVdssCommand internal: false.
Entities affected : ID: > 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDSAction group > MANIPULATE_HOST with role type ADMIN > 2018-05-03 12:00:37,918-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] START, > SetVdsStatusVDSCommand(HostName = ovnode102, > SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > status='PreparingForMaintenance', nonOperationalReason='NONE', > stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 647d5f78 > > > > I have only 1 host in the DC, status is Up, the cluster says host count is > 2 even though the second host stays Non Operational. I don't know how to > remove it. > > I just installed and tried to join the DC, this is a fresh installation, > the engine was launched through cockpit. > > > > Here's what nodectl shows from the host: > > > > ovnode102 ~]# nodectl check > > Status: OK > > Bootloader ... OK > > Layer boot entries ... OK > > Valid boot entries ... OK > > Mount points ... OK > > Separate /var ... OK > > Discard is used ... OK > > Basic storage ... OK > > Initialized VG ... OK > > Initialized Thin Pool ... OK > > Initialized LVs ... OK > > Thin storage ... OK > > Checking available space in thinpool ... OK > > Checking thinpool auto-extend ... OK > > vdsmd ... OK > > > > > > > > Thanks, > > > > > > *From:* Yanir Quinn [mailto:yquinn at redhat.com] > *Sent:* Thursday, May 3, 2018 1:19 AM > > > *To:* Justin Zygmont > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] adding a host > > > > Did you try switching the host to maintenance mode first ? > > What is the state of the data center and how many active hosts do you have > now? > > And did you perform any updates recently or just run a fresh installation > ? if so , did you run engine-setup before launching engine ? > > > > On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont > wrote: > > I read this page and it doesn't help since this is a host that can't be > removed, the "remove" button is dimmed out. > > > > This is 4.22 ovirt node, but the host stays in a "non operational" state. > I notice the logs have a lot of errors, for example: > > > > > > the SERVER log: > > > > 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool] (ForkJoinPool-1-worker-14) > IJ000609: Attempt to return connection twice: org.jboss.jca.core. > connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL > managed connection=org.jboss.jca.adapters.jdbc.local. > LocalManagedConnection at 3f37cf10 connection handles=0 > lastReturned=1525297223847 lastValidated=1525290267811 > lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool at 20550f35 mcp= > SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null]: java.lang.Throwable: STACKTRACE > > at org.jboss.jca.core.connectionmanager.pool.mcp. > SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection( > SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:722) > > at org.jboss.jca.core.connectionmanager.pool.mcp. 
> SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection( > SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:611) > > at org.jboss.jca.core.connectionmanager.pool. > AbstractPool.returnConnection(AbstractPool.java:847) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > returnManagedConnection(AbstractConnectionManager.java:725) > > at org.jboss.jca.core.connectionmanager.tx. > TxConnectionManagerImpl.managedConnectionDisconnected( > TxConnectionManagerImpl.java:585) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > disconnectManagedConnection(AbstractConnectionManager.java:988) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > reconnectManagedConnection(AbstractConnectionManager.java:974) > > at org.jboss.jca.core.connectionmanager.AbstractConnectionManager. > allocateConnection(AbstractConnectionManager.java:792) > > at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection( > WrapperDataSource.java:138) > > at org.jboss.as.connector.subsystems.datasources. > WildFlyDataSource.getConnection(WildFlyDataSource.java:64) > > at org.springframework.jdbc.datasource.DataSourceUtils. > doGetConnection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.datasource.DataSourceUtils. > getConnection(DataSourceUtils.java:77) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) > [dal.jar:] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) > [dal.jar:] > > at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler. > executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] > > at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler. > executeReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] > > at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) > [dal.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.isVmRunningOnHost( > HostNetworkTopologyPersisterImpl.java:210) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.logChangedDisplayNetwork( > HostNetworkTopologyPersisterImpl.java:179) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.auditNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:148) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. 
> HostNetworkTopologyPersisterImpl.lambda$persistAndEnforceNetworkCompli > ance$0(HostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] > > at org.ovirt.engine.core.utils.tr > > ansaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) > [utils.jar:] > > at org.ovirt.engine.core.utils.tr > > ansaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) > [utils.jar:] > > at org.ovirt.engine.core.utils.tr > > ansaction.TransactionSupport.executeInScope(TransactionSupport.java:105) > [utils.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:93) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.vdsbroker. > HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance( > HostNetworkTopologyPersisterImpl.java:154) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > processRefreshCapabilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > handleRefreshCapabilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.VdsManager. > refreshHostSync(VdsManager.java:567) [vdsbroker.jar:] > > at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand > .executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase. > executeWithoutTransaction(CommandBase.java:1133) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase. > executeActionInTransactionScope(CommandBase.java:1285) [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) > [bll.jar:] > > at org.ovirt.engine.core.utils.tr > > ansaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) > [utils.jar:] > > at org.ovirt.engine.core.utils.tr > > ansaction.TransactionSupport.executeInScope(TransactionSupport.java:103) > [utils.jar:] > > at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) > [bll.jar:] > > at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) > [bll.jar:] > > at org.ovirt.engine.core.bll.executor. > DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) > [bll.jar:] > > at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) > [bll.jar:] > > at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) > [:1.8.0_161] > > at sun.reflect.DelegatingMethodAccessorImpl.invoke( > DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] > > at java.lang.reflect.Method.invoke(Method.java:498) > [rt.jar:1.8.0_161] > > at org.jboss.as.ee.component.ManagedReferenceMethodIntercep > tor.processInvocation(ManagedReferenceMethodInterceptor.java:52) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext$Invocation. > proceed(InterceptorContext.java:509) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > delegateInterception(Jsr299BindingsInterceptor.java:78) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. > doMethodInterception(Jsr299BindingsInterceptor.java:88) > > at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor. 
> processInvocation(Jsr299BindingsInterceptor.java:101) > > at org.jboss.as.ee.component.interceptors. > UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.in > > vocationmetrics.ExecutionTimeInterceptor.processInvocation( > ExecutionTimeInterceptor.java:43) [wildfly-ejb3-11.0.0.Final. > jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor. > processInvocation(ConcurrentContextInterceptor.java:45) > [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InitialInterceptor.processInvocation( > InitialInterceptor.java:40) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ChainedInterceptor.processInvocation( > ChainedInterceptor.java:53) > > at org.jboss.as.ee.component.interceptors. > ComponentDispatcherInterceptor.processInvocation( > ComponentDispatcherInterceptor.java:52) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.si > > ngleton.SingletonComponentInstanceAssociationInterceptor. > processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext$Invocation. > proceed(InterceptorContext.java:509) > > at org.jboss.weld.ejb.AbstractEJBRequestScopeActivat > ionInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.jboss.as.weld.ejb.EjbRequestScopeActivationInter > ceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.in > > terceptors.CurrentInvocationContextInterceptor.processInvocation( > CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final. > jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.in > > vocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.security.SecurityContextInterceptor. > processInvocation(SecurityContextInterceptor.java:100) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.deployment.processors. 
> StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.in > > terceptors.ShutDownInterceptorFactory$1.processInvocation( > ShutDownInterceptorFactory.java:64) [wildfly-ejb3-11.0.0.Final. > jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ejb3.component.in > > terceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) > [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.as.ee.component.Name > > spaceContextInterceptor.processInvocation(NamespaceContextInterceptor. > java:50) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ContextClassLoaderInterceptor. > processInvocation(ContextClassLoaderInterceptor.java:60) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.InterceptorContext.run( > InterceptorContext.java:438) > > at org.wildfly.security.manager.WildFlySecurityManager.doChecked( > WildFlySecurityManager.java:609) > > at org.jboss.invocation.AccessCheckingInterceptor. > processInvocation(AccessCheckingInterceptor.java:57) > > at org.jboss.invocation.InterceptorContext.proceed( > InterceptorContext.java:422) > > at org.jboss.invocation.ChainedInterceptor.processInvocation( > ChainedInterceptor.java:53) > > at org.jboss.as.ee.component.ViewService$View.invoke( > ViewService.java:198) > > at org.jboss.as.ee.component.ViewDescription$1.processInvocation( > ViewDescription.java:185) > > at org.jboss.as.ee.component.ProxyInvocationHandler.invoke( > ProxyInvocationHandler.java:81) > > at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$ > view4.runInternalAction(Unknown Source) [bll.jar:] > > at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) > [:1.8.0_161] > > at sun.reflect.DelegatingMethodAccessorImpl.invoke( > DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] > > at java.lang.reflect.Method.invoke(Method.java:498) > [rt.jar:1.8.0_161] > > at org.jboss.weld.util.reflection.Reflections. > invokeAndUnwrap(Reflections.java:433) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandl > er.invoke(EnterpriseBeanProxyMethodHandler.java:127) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke( > EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnter > priseTargetBeanInstance.invoke(InjectionPointPropagatingEnter > priseTargetBeanInstance.java:67) [weld-core-impl-2.4.3.Final. > jar:2.4.3.Final] > > at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) > [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] > > at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$ > BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown > Source) [bll.jar:] > > at org.ovirt.engine.core.bll.VdsEventListener. 
> refreshHostCapabilities(VdsEventListener.java:598) [bll.jar:] > > at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$ > SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext( > HostConnectionRefresher.java:47) [vdsbroker.jar:] > > at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$ > SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext( > HostConnectionRefresher.java:30) [vdsbroker.jar:] > > at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$ > EventCallable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar: > ] > > at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$ > EventCallable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] > > at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinPool$WorkQueue. > runTask(ForkJoinPool.java:1056) [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) > [rt.jar:1.8.0_161] > > at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) > [rt.jar:1.8.0_161] > > > > 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] > (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted > atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa > > > > > > > > > > > > And the ENGINE log: > > > > 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] > (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back > > 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] > (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: > IllegalStateException: Transaction Local transaction > (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa > status: ActionStatus.ABORTED >, owner=Local transaction context for > provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK > > 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll. > HandleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) > [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand > internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 > Type: VDS > > 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] > (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: > HandleVdsVersionCommand internal: true. Entities affected : ID: > 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS > > 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Refresh host capabilities finished. Lock released. Monitoring can run now > for host 'ovnode102 from data-center 'Default' > > 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' > failed: Could not get JDBC Connection; nested exception is > java.sql.SQLException: javax.resource.ResourceException: IJ000457: > Unchecked throwable in managedConnectionReconnected() > cl=org.jboss.jca.core.connectionmanager.listener. 
> TxConnectionListener at 1fab84d7[state=NORMAL managed > connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection@ > 3f37cf10 connection handles=0 lastReturned=1525297223847 > lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false > pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 > mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null] > > 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Exception: org.springframework.jdbc.CannotGetJdbcConnectionException: > Could not get JDBC Connection; nested exception is java.sql.SQLException: > javax.resource.ResourceException: IJ000457: Unchecked throwable in > managedConnectionReconnected() cl=org.jboss.jca.core. > connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL > managed connection=org.jboss.jca.adapters.jdbc.local. > LocalManagedConnection at 3f37cf10 connection handles=0 > lastReturned=1525297223847 lastValidated=1525290267811 > lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core. > connectionmanager.pool.strategy.OnePool at 20550f35 mcp= > SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] > xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 > connectionManager=5bec70d2 warned=false currentXid=null > productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] > txSync=null] > > at org.springframework.jdbc.datasource.DataSourceUtils. > getConnection(DataSourceUtils.java:80) [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) > [spring-jdbc.jar:4.3.9.RELEASE] > > at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$ > PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) > [dal.jar:] > > . > > . > > . > > . > > 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) > [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to > refresh the capabilities of host ovnode102. > > 2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll. > RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] > Lock freed to object 'EngineLock:{exclusiveLocks='[ > 74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, HOST_NETWORK74dfe965-cb11- > 495a-96a0-3dae6b3cbd75=HOST_NETWORK]', sharedLocks=''}' > > 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, > GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, > VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: > 300f7345 > > 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, > GetHardwareInfoAsyncVDSCommand, log id: 300f7345 > > 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running > command: SetNonOperationalVdsCommand internal: true. Entities affected : > ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS > > 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, > SetVdsStatusVDSCommand(HostName = ovnode102., > SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', > status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 > > 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll.pro > > vider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ > f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' > > 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll.pro > > vider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. > > 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] > (default task-40) [] User admin at internal successfully logged in with > scopes: ovirt-app-api ovirt-ext=token-info:authz-search > ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate > ovirt-ext=token:password-access > > 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll.pro > > vider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) > [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[ > f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 > tasks in queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineScheduled' is using 1 threads out of 100 and 99 tasks are waiting in > the queue. > > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are > waiting in the queue. 
> > 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks are waiting in > the queue. > > > > > > > > > > > > > > > > > > > > > > > > *From:* Yanir Quinn [mailto:yquinn at redhat.com] > *Sent:* Wednesday, May 2, 2018 12:34 AM > *To:* Justin Zygmont > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] adding a host > > > > Hi, > > What document are you using ? > > See if you find the needed information here : https://ovirt.org/ > documentation/admin-guide/chap-Hosts/ > > > > > For engine related potential errors i recommend also checking the > engine.log and in UI check the events section. > > Regards, > > Yanir Quinn > > > > On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont > wrote: > > I have tried to add a host to the engine and it just takes forever never > working or giving any error message. When I look in the engine?s > server.log I see it says the networks are missing. > > I thought when you install a node and add it to the engine it will add the > networks automatically? The docs don?t give much information about this, > and I can?t even remove the host through the UI. What steps are required > to prepare a node when several vlans are involved? > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Joseph.Kelly at tradingscreen.com Mon May 7 07:04:13 2018 From: Joseph.Kelly at tradingscreen.com (Joseph Kelly) Date: Mon, 7 May 2018 07:04:13 +0000 Subject: [ovirt-users] Data Center Upgrade to 3.6 fails with "Upgrading a pool while an upgrade is in process is unsupported" - any fix for this ? Message-ID: Hello All - I'm trying to upgrade a Data Center in ovirt 3.6.4 from 3.4 to 3.6 but I'm getting this error when I tried: May 7, 2018 3:07:55 PM VDSM command failed: Upgrading a pool while an upgrade is in process is unsupported (pool: `9667c5e8-97b1-4e09-be44-2696a69f8959`) Please note that all clusters are already at 3.6 compatability and the final step is to upgrade the Data Center to 3.6. So I've tried these steps as a workaorund from the RHEV tech-doc below but it hasn't worked for me: Deactivate the Export and ISO domains. Switch SPM to another host. Try the upgrade again Activate Export and ISO domains as required. (From https://access.redhat.com/solutions/2332581 ) And I also tried to search mail-archive.com for past Email but didn't find anything matching this particular issue. https://www.mail-archive.com/search?l=users%40ovirt.org&q=3.6+compat&x=0&y=0 https://www.mail-archive.com/search?l=users%40ovirt.org&q=ovirt+Data+center+compat&x=0&y=0 So has anyone found another way to fix this problem e.g. does the ovirt DB need to be directly updated ? Thanks, Joe. -- J. Kelly Infrastructure Engineer TradingScreen www.tradingscreen.com email: joseph.kelly at tradingscreen.com Follow TradingScreen on Twitter, Facebook, or our blog, Trading Smarter This message is intended only for the recipient(s) named above and may contain confidential information. If you are not an intended recipient, you should not review, distribute or copy this message. 
Please notify the sender immediately by e-mail if you have received this message in error and delete it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmirecki at redhat.com Mon May 7 07:41:52 2018 From: mmirecki at redhat.com (Marcin Mirecki) Date: Mon, 7 May 2018 09:41:52 +0200 Subject: [ovirt-users] Problems with OVN In-Reply-To: <5AEEBFEC.5070901@neutraali.net> References: <5AEEBFEC.5070901@neutraali.net> Message-ID: Hi Samuli, Let's first make sure the configuration is correct. How did you configure the env? Did you use the automatic engine-setup configuration? Can you please send me the output of the following: on engine: ovn-sbctl show ovn-nbctl show on hosts: ip addr ovs-vsctl show The 'vdsm-tool ovn-config' command configures the ovn controller to use the first ip as the ovn central, and the local tunnel to use the second one. Regards, Marcin On Sun, May 6, 2018 at 10:42 AM, Samuli Heinonen wrote: > Hi all, > > I'm building a home lab using oVirt+GlusterFS in hyperconverged(ish) setup. > > My setup consists of 2x nodes with ASRock H110M-STX motherboard, Intel > Pentium G4560 3,5 GHz CPU and 16 GB RAM. Motherboard has integrated Intel > Gigabit I219V LAN. At the moment I'm using RaspberryPi as Gluster arbiter > node. Nodes are connected to basic "desktop switch" without any management > available. > > Hardware is nowhere near perfect, but it get its job done and is enough > for playing around. However I'm having problems getting OVN to work > properly and I'm clueless where to look next. > > oVirt is setup like this: > oVirt engine host oe / 10.0.1.101 > oVirt hypervisor host o2 / 10.0.1.18 > oVirt hypervisor host o3 / 10.0.1.21 > OVN network 10.0.200.0/24 > > When I spin up a VM in o2 and o3 with IP address in network 10.0.1.0/24 > everything works fine. VMs can interact between each other without any > problems. > > Problems show up when I try to use OVN based network between virtual > machines. If virtual machines are on same hypervisor then everything seems > to work ok. But if I have virtual machine on hypervisor o2 and another one > on hypervisor o3 then TCP connections doesn't work very well. UDP seems to > be ok and it's possible to ping hosts, do dns & ntp queries and so on. > > Problem with TCP is that for example when taking SSH connection to another > host at some point connection just hangs and most of the time it's not even > possible to even log in before connectiong hangs. If I look into tcpdump at > that point it looks like packets never reach destination. Also, if I have > multiple connections, then all of them hang at the same time. > > I have tried switching off tx checksum and other similar settings, but it > didn't make any difference. > > I'm suspecting that hardware is not good enough. Before investigating into > new hardware I'd like to get some confirmation that everything is setup > correctly. > > When setting up oVirt/OVN I had to run following undocumented command to > get it working at all: vdsm-tool ovn-config 10.0.1.101 10.0.1.21 (oVirt > engine IP, hypervisor IP). Especially this makes me think that I have > missed some crucial part in configuration. 
> > On oVirt engine in /var/log/openvswitch/ovsdb-server-nb.log there are > error messages: > 2018-05-06T08:30:05.418Z|00913|stream_ssl|WARN|SSL_read: unexpected SSL > connection close > 2018-05-06T08:30:05.418Z|00914|jsonrpc|WARN|ssl:127.0.0.1:53152: receive > error: Protocol error > 2018-05-06T08:30:05.419Z|00915|reconnect|WARN|ssl:127.0.0.1:53152: > connection dropped (Protocol error) > > To be honest, I'm not sure what's causing those error messages or are they > related. I found out some bug reports stating that they are not critical. > > Any ideas what to do next or should I just get better hardware? :) > > Best regards, > Samuli Heinonen > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bernhard at bdick.de Mon May 7 08:50:52 2018 From: bernhard at bdick.de (Bernhard Dick) Date: Mon, 7 May 2018 10:50:52 +0200 Subject: [ovirt-users] ovirt engine frequently rebooting/changing host Message-ID: <49a043bd-bc88-ebd5-5cad-35501df4a74b@bdick.de> Hi, currently I'm evaluating oVirt and I have three hosts installed within nested KVM. They're sharing a gluster environment which has been configured using the oVirt Node Wizards. It seems to work quite well, but after some hours I get many status update mails from the ovirt engine which are either going to EngineStop or EngeineForceStop. Sometimes the host where the engine runs is switched. After some of those reboots there is silence for some hours before it is starting over. Can you tell me where I should look at to fix that problem? Regards Bernhard Dick From didi at redhat.com Mon May 7 09:23:22 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 7 May 2018 12:23:22 +0300 Subject: [ovirt-users] ovirt engine frequently rebooting/changing host In-Reply-To: <49a043bd-bc88-ebd5-5cad-35501df4a74b@bdick.de> References: <49a043bd-bc88-ebd5-5cad-35501df4a74b@bdick.de> Message-ID: On Mon, May 7, 2018 at 11:50 AM, Bernhard Dick wrote: > Hi, > > currently I'm evaluating oVirt and I have three hosts installed within > nested KVM. They're sharing a gluster environment which has been configured > using the oVirt Node Wizards. > It seems to work quite well, but after some hours I get many status update > mails from the ovirt engine which are either going to EngineStop or > EngeineForceStop. Sometimes the host where the engine runs is switched. > After some of those reboots there is silence for some hours before it is > starting over. Can you tell me where I should look at to fix that problem? You can check, on all hosts, /var/log/ovirt-hosted-engine-ha/* . Good luck, -- Didi From marceloltmm at gmail.com Mon May 7 11:20:39 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Mon, 7 May 2018 08:20:39 -0300 Subject: [ovirt-users] problem to create snapshot In-Reply-To: References: Message-ID: Good morning, vdsm-client StorageDomain getInfo storagedomainID= : { "uuid": "6e5cce71-3438-4045-9d54-607123e0557e", "type": "ISCSI", "vguuid": "7JOYDc-iQgm-11Pk-2czh-S8k0-Qc5U-f1npF1", "metadataDevice": "36005076300810a4db800000000000002", "state": "OK", "version": "4", "role": "Regular", "vgMetadataDevice": "36005076300810a4db800000000000002", "class": "Data", "pool": [ "77e24b20-9d21-4952-a089-3c5c592b4e6d" ], "name": "IBM01" } I am have 3 VMs runs in this storage, i am migrate vms to another storages but this vms show the same error when try migrate on because need snapshots. 
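For reference, the metadata of the volume that fails can also be dumped directly on the SPM host with vdsm-client, which should show whether the 'DOMAIN' key reported missing in the traceback quoted below is really absent. This is only a minimal sketch: it assumes the Volume getInfo verb is available on vdsm 4.20.23, it reuses the pool and domain UUIDs from the StorageDomain getInfo output above plus the image UUID taken from the resource name in the SPM error quoted below, and VOL_UUID is a placeholder for whichever volume of that image you want to inspect.

    # pool/domain UUIDs from the StorageDomain getInfo output above,
    # image UUID from the resource name in the SPM error below,
    # VOL_UUID is a placeholder for the volume to check
    vdsm-client Volume getInfo \
        storagepoolID=77e24b20-9d21-4952-a089-3c5c592b4e6d \
        storagedomainID=6e5cce71-3438-4045-9d54-607123e0557e \
        imageID=ed7f1c0f-5986-4979-b783-5c465b0854c6 \
        volumeID=VOL_UUID

If this call also fails with MetaDataKeyNotFoundError, the volume metadata slot on the domain itself is damaged, which would match Ala's suggestion below to move the storage to maintenance and re-initialize it rather than anything specific to the snapshot operation.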
Very Thanks. 2018-05-06 11:26 GMT-03:00 Ala Hino : > [Please always CC ovirt-users so other engineer can provide help] > > It seems that the storage domain is corrupted. > Can you please run the following command and send the output? > > vdsm-client StorageDomain getInfo storagedomainID= > > You may need to move the storage to maintenance and re-initialize it. > > On Thu, May 3, 2018 at 10:10 PM, Marcelo Leandro > wrote: > >> Hello, >> >> Thank you for reply: >> >> oVirt Version - 4.1.9 >> Vdsm Versoin - 4.20.23 >> >> attached logs, >> >> Very Thanks. >> >> Marcelo Leandro >> >> 2018-05-03 15:59 GMT-03:00 Ala Hino : >> >>> Can you please share more info? >>> - The version you are using >>> - Full log of vdsm and the engine >>> >>> Is the VM running or down while creating the snapshot? >>> >>> On Thu, May 3, 2018 at 8:32 PM, Marcelo Leandro >>> wrote: >>> >>>> Anyone help me? >>>> >>>> 2018-05-02 17:55 GMT-03:00 Marcelo Leandro : >>>> >>>>> Hello , >>>>> >>>>> I am geting error when try do a snapshot: >>>>> >>>>> Error msg in SPM log. >>>>> >>>>> 2018-05-02 17:46:11,235-0300 WARN (tasks/2) [storage.ResourceManager] >>>>> Resource factory failed to create resource '01_img_6e5cce71-3438-4045-9d5 >>>>> 4-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6'. Canceling >>>>> request. (resourceManager:543) >>>>> Traceback (most recent call last): >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >>>>> line 539, in registerResource >>>>> obj = namespaceObj.factory.createResource(name, lockType) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", >>>>> line 193, in createResource >>>>> lockType) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", >>>>> line 122, in __getResourceCandidatesList >>>>> imgUUID=resourceName) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line >>>>> 213, in getChain >>>>> if srcVol.isLeaf(): >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", >>>>> line 1430, in isLeaf >>>>> return self._manifest.isLeaf() >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", >>>>> line 138, in isLeaf >>>>> return self.getVolType() == sc.type2name(sc.LEAF_VOL) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", >>>>> line 134, in getVolType >>>>> self.voltype = self.getMetaParam(sc.VOLTYPE) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", >>>>> line 118, in getMetaParam >>>>> meta = self.getMetadata() >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", >>>>> line 112, in getMetadata >>>>> md = VolumeMetadata.from_lines(lines) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", >>>>> line 103, in from_lines >>>>> "Missing metadata key: %s: found: %s" % (e, md)) >>>>> MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing >>>>> metadata key: 'DOMAIN': found: {'NONE': '############################# >>>>> ############################################################ >>>>> ############################################################ >>>>> ############################################################ >>>>> ############################################################ >>>>> ############################################################ >>>>> ############################################################ >>>>> ############################################################ >>>>> #####################################################'}",) >>>>> 2018-05-02 
17:46:11,286-0300 WARN (tasks/2) >>>>> [storage.ResourceManager.Request] (ResName='01_img_6e5cce71-3438 >>>>> -4045-9d54-607123e0557e.ed7f1c0f-5986-4979-b783-5c465b0854c6', >>>>> ReqID='a3cd9388-977b-45b9-9aa0-e431aeff8750') Tried to cancel a >>>>> processed request (resourceManager:187) >>>>> 2018-05-02 17:46:11,286-0300 ERROR (tasks/2) >>>>> [storage.TaskManager.Task] (Task='ba0766ca-08a1-4d65-a4e9-1e0171939037') >>>>> Unexpected error (task:875) >>>>> Traceback (most recent call last): >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >>>>> 882, in _run >>>>> return fn(*args, **kargs) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >>>>> 336, in run >>>>> return self.cmd(*self.argslist, **self.argsdict) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >>>>> line 79, in wrapper >>>>> return method(self, *args, **kwargs) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line >>>>> 1938, in createVolume >>>>> with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE): >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >>>>> line 1025, in acquireResource >>>>> return _manager.acquireResource(namespace, name, lockType, >>>>> timeout=timeout) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", >>>>> line 475, in acquireResource >>>>> raise se.ResourceAcqusitionFailed() >>>>> ResourceAcqusitionFailed: Could not acquire resource. Probably >>>>> resource factory threw an exception.: () >>>>> >>>>> >>>>> Anyone help? >>>>> >>>> >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiri.slezka at slu.cz Mon May 7 15:41:03 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Mon, 7 May 2018 17:41:03 +0200 Subject: [ovirt-users] sun.security.validator.ValidatorException after update to 4.2.3 Message-ID: <9f8dc490-6470-beac-68c7-e0423d8a257f@slu.cz> Hi, after upgrade ovirt from 4.2.2 to 4.2.3.5-1.el7.centos I cannot login into admin portal because sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target I am using custom 3rd party certificate Any hints how to resolve this issue? Thanks in advance, Jiri Slezka -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3716 bytes Desc: S/MIME Cryptographic Signature URL: From michal.skrivanek at redhat.com Mon May 7 16:19:05 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Mon, 7 May 2018 18:19:05 +0200 Subject: [ovirt-users] oVirt and Oracle Linux support In-Reply-To: References: Message-ID: <93E35489-8D4F-4EA7-9CB5-DC9EA61E28AC@redhat.com> > On 3 May 2018, at 11:40, Simon Coter wrote: > > Hi, > > I'm new to this ML. > I started to play a bit with oVirt and I saw that you actually support RH and CentOS. > Is there any plan to also support Oracle Linux ? nope why would you want to use OL? Supposing you do not want to run it together with Oracle server. If you want to pay for OS buy RHEL, at least those $ would be used to improve oVirt;-) > I had to fight so much to get everything on OL correctly running. 
yeah, it?s probably doable if you mix it with centos packages manually probably not worth the effort maintaining it Thanks, michal > Thanks > > Simon > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From samppah at neutraali.net Mon May 7 17:44:20 2018 From: samppah at neutraali.net (Samuli Heinonen) Date: Mon, 07 May 2018 20:44:20 +0300 Subject: [ovirt-users] Problems with OVN In-Reply-To: References: <5AEEBFEC.5070901@neutraali.net> Message-ID: <5AF09074.1060304@neutraali.net> Hi Marcin, Thank you for your response. I used engine-setup to do the configuration. Only exception is that I had to run "vdsm-tool ovn-config engine-ip local-ip" (ie. vdsm-tool ovn-config 10.0.1.101 10.0.1.21) on hypervisors. Here is the output of requested commands: [root at oe ~]# ovn-sbctl show Chassis "049183d5-61b6-4b9c-bae3-c7b10d30f8cb" hostname: "o2.hirundinidae.local" Encap geneve ip: "10.0.1.18" options: {csum="true"} Port_Binding "87c5e44a-7c8b-41b2-89a6-fa52f27643ed" Chassis "972f1b7b-10de-4e4f-a5f9-f080890f087d" hostname: "o3.hirundinidae.local" Encap geneve ip: "10.0.1.21" options: {csum="true"} Port_Binding "ccea5185-3efa-4d9c-9475-9e46009fea4f" Port_Binding "e868219c-f16c-45c6-b7b1-72d044fee602" [root at oe ~]# ovn-nbctl show switch 7d264a6c-ea48-4a6d-9663-5244102dc9bb (vm-private) port 4ec3ecf6-d04a-406c-8354-c5e195ffde05 addresses: ["00:1a:4a:16:01:06 dynamic"] switch 40aedb7d-b1c3-400e-9ddb-16bee3bb312a (vm-public) port 87c5e44a-7c8b-41b2-89a6-fa52f27643ed addresses: ["00:1a:4a:16:01:03"] port ccea5185-3efa-4d9c-9475-9e46009fea4f addresses: ["00:1a:4a:16:01:0c"] port e868219c-f16c-45c6-b7b1-72d044fee602 addresses: ["00:1a:4a:16:01:0a"] [root at o2 ~]# ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp0s31f6: mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP qlen 1000 link/ether 78:f2:9e:90:bc:64 brd ff:ff:ff:ff:ff:ff 3: enp0s20f0u5c2: mtu 1500 qdisc pfifo_fast master public state UNKNOWN qlen 1000 link/ether 50:3e:aa:4c:9b:01 brd ff:ff:ff:ff:ff:ff 4: ovs-system: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 82:49:e1:15:af:56 brd ff:ff:ff:ff:ff:ff 5: br-int: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether a2:bb:78:7e:35:4b brd ff:ff:ff:ff:ff:ff 21: public: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 50:3e:aa:4c:9b:01 brd ff:ff:ff:ff:ff:ff inet6 fe80::523e:aaff:fe4c:9b01/64 scope link valid_lft forever preferred_lft forever 22: ovirtmgmt: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 78:f2:9e:90:bc:64 brd ff:ff:ff:ff:ff:ff inet 10.0.1.18/24 brd 10.0.1.255 scope global ovirtmgmt valid_lft forever preferred_lft forever inet6 fe80::7af2:9eff:fe90:bc64/64 scope link valid_lft forever preferred_lft forever 23: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN qlen 1000 link/ether 02:c0:7a:e3:4e:76 brd ff:ff:ff:ff:ff:ff inet6 fe80::c0:7aff:fee3:4e76/64 scope link valid_lft forever preferred_lft forever 24: ;vdsmdummy;: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether a2:2f:f2:58:88:da brd ff:ff:ff:ff:ff:ff 26: vnet0: mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000 link/ether fe:1a:4a:16:01:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc1a:4aff:fe16:103/64 scope link valid_lft forever preferred_lft forever 29: vnet1: mtu 1500 qdisc pfifo_fast master 
ovirtmgmt state UNKNOWN qlen 1000 link/ether fe:1a:4a:16:01:05 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc1a:4aff:fe16:105/64 scope link valid_lft forever preferred_lft forever [root at o2 ~]# ovs-vsctl show 6be6d37c-74cf-485e-9957-f8eb4bddb2ca Bridge br-int fail_mode: secure Port br-int Interface br-int type: internal Port "ovn-972f1b-0" Interface "ovn-972f1b-0" type: geneve options: {csum="true", key=flow, remote_ip="10.0.1.21"} Port "vnet0" Interface "vnet0" ovs_version: "2.9.0" [root at o3 ~]# ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp0s31f6: mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP qlen 1000 link/ether 78:f2:9e:90:bc:50 brd ff:ff:ff:ff:ff:ff 3: enp0s20f0u5c2: mtu 1500 qdisc pfifo_fast master public state UNKNOWN qlen 1000 link/ether 50:3e:aa:4c:9c:03 brd ff:ff:ff:ff:ff:ff 4: ovs-system: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 7e:43:c1:b0:48:73 brd ff:ff:ff:ff:ff:ff 5: br-int: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 3a:fe:68:34:31:4c brd ff:ff:ff:ff:ff:ff 21: public: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 50:3e:aa:4c:9c:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::523e:aaff:fe4c:9c03/64 scope link valid_lft forever preferred_lft forever 22: ovirtmgmt: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 78:f2:9e:90:bc:50 brd ff:ff:ff:ff:ff:ff inet 10.0.1.21/24 brd 10.0.1.255 scope global ovirtmgmt valid_lft forever preferred_lft forever inet6 fe80::7af2:9eff:fe90:bc50/64 scope link valid_lft forever preferred_lft forever 24: ;vdsmdummy;: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 02:92:3f:89:f2:c7 brd ff:ff:ff:ff:ff:ff 25: vnet0: mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 1000 link/ether fe:16:3e:0b:b1:2d brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe0b:b12d/64 scope link valid_lft forever preferred_lft forever 27: vnet2: mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 1000 link/ether fe:1a:4a:16:01:0b brd ff:ff:ff:ff:ff:ff inet6 fe80::fc1a:4aff:fe16:10b/64 scope link valid_lft forever preferred_lft forever 29: vnet4: mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000 link/ether fe:1a:4a:16:01:0c brd ff:ff:ff:ff:ff:ff inet6 fe80::fc1a:4aff:fe16:10c/64 scope link valid_lft forever preferred_lft forever 31: vnet6: mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 1000 link/ether fe:1a:4a:16:01:07 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc1a:4aff:fe16:107/64 scope link valid_lft forever preferred_lft forever 32: vnet7: mtu 1500 qdisc pfifo_fast master public state UNKNOWN qlen 1000 link/ether fe:1a:4a:16:01:09 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc1a:4aff:fe16:109/64 scope link valid_lft forever preferred_lft forever 33: vnet8: mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000 link/ether fe:1a:4a:16:01:0a brd ff:ff:ff:ff:ff:ff inet6 fe80::fc1a:4aff:fe16:10a/64 scope link valid_lft forever preferred_lft forever 34: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN qlen 1000 link/ether 46:88:1c:22:6f:c3 brd ff:ff:ff:ff:ff:ff inet6 fe80::4488:1cff:fe22:6fc3/64 scope link valid_lft forever preferred_lft forever [root at o3 ~]# ovs-vsctl show 8c2c19fc-d9e4-423d-afcb-f5ecff602ca7 Bridge br-int fail_mode: secure Port "vnet4" Interface "vnet4" Port "ovn-049183-0" Interface "ovn-049183-0" type: geneve options: {csum="true", key=flow, 
remote_ip="10.0.1.18"} Port "vnet8" Interface "vnet8" Port br-int Interface br-int type: internal ovs_version: "2.9.0" Best regards, Samuli Marcin Mirecki wrote: > Hi Samuli, > > Let's first make sure the configuration is correct. > How did you configure the env? Did you use the automatic engine-setup > configuration? > > Can you please send me the output of the following: > > on engine: > ovn-sbctl show > ovn-nbctl show > > on hosts: > ip addr > ovs-vsctl show > > The 'vdsm-tool ovn-config' command configures the ovn controller to use the > first ip as the ovn central, and the local tunnel to use the second one. > > Regards, > Marcin > > > On Sun, May 6, 2018 at 10:42 AM, Samuli Heinonen > wrote: > >> Hi all, >> >> I'm building a home lab using oVirt+GlusterFS in hyperconverged(ish) setup. >> >> My setup consists of 2x nodes with ASRock H110M-STX motherboard, Intel >> Pentium G4560 3,5 GHz CPU and 16 GB RAM. Motherboard has integrated Intel >> Gigabit I219V LAN. At the moment I'm using RaspberryPi as Gluster arbiter >> node. Nodes are connected to basic "desktop switch" without any management >> available. >> >> Hardware is nowhere near perfect, but it get its job done and is enough >> for playing around. However I'm having problems getting OVN to work >> properly and I'm clueless where to look next. >> >> oVirt is setup like this: >> oVirt engine host oe / 10.0.1.101 >> oVirt hypervisor host o2 / 10.0.1.18 >> oVirt hypervisor host o3 / 10.0.1.21 >> OVN network 10.0.200.0/24 >> >> When I spin up a VM in o2 and o3 with IP address in network 10.0.1.0/24 >> everything works fine. VMs can interact between each other without any >> problems. >> >> Problems show up when I try to use OVN based network between virtual >> machines. If virtual machines are on same hypervisor then everything seems >> to work ok. But if I have virtual machine on hypervisor o2 and another one >> on hypervisor o3 then TCP connections doesn't work very well. UDP seems to >> be ok and it's possible to ping hosts, do dns& ntp queries and so on. >> >> Problem with TCP is that for example when taking SSH connection to another >> host at some point connection just hangs and most of the time it's not even >> possible to even log in before connectiong hangs. If I look into tcpdump at >> that point it looks like packets never reach destination. Also, if I have >> multiple connections, then all of them hang at the same time. >> >> I have tried switching off tx checksum and other similar settings, but it >> didn't make any difference. >> >> I'm suspecting that hardware is not good enough. Before investigating into >> new hardware I'd like to get some confirmation that everything is setup >> correctly. >> >> When setting up oVirt/OVN I had to run following undocumented command to >> get it working at all: vdsm-tool ovn-config 10.0.1.101 10.0.1.21 (oVirt >> engine IP, hypervisor IP). Especially this makes me think that I have >> missed some crucial part in configuration. >> >> On oVirt engine in /var/log/openvswitch/ovsdb-server-nb.log there are >> error messages: >> 2018-05-06T08:30:05.418Z|00913|stream_ssl|WARN|SSL_read: unexpected SSL >> connection close >> 2018-05-06T08:30:05.418Z|00914|jsonrpc|WARN|ssl:127.0.0.1:53152: receive >> error: Protocol error >> 2018-05-06T08:30:05.419Z|00915|reconnect|WARN|ssl:127.0.0.1:53152: >> connection dropped (Protocol error) >> >> To be honest, I'm not sure what's causing those error messages or are they >> related. I found out some bug reports stating that they are not critical. 
>> >> Any ideas what to do next or should I just get better hardware? :) >> >> Best regards, >> Samuli Heinonen >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > From randyrue at gmail.com Mon May 7 18:43:51 2018 From: randyrue at gmail.com (Rue, Randy) Date: Mon, 7 May 2018 11:43:51 -0700 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <20180504114421.7c9cad1d@t460p> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <20180504114421.7c9cad1d@t460p> Message-ID: I've sort of had some progress. On Friday I went to the dentist and when I returned, my VM could ping google. I don't believe I changed anything Friday morning but I confess I've been flailing on this for so long I'm not keeping detailed notes on what I change. And as I'm evaluating oVirt as a possible replacement for our production xencenter/xenserver systems, I need to know what was wrong and what fixed it. I reinstalled the ovirt-engine box and two hosts and started again. The only change I've made beyond the default is to remove the no-mac-spoofing filter from the ovirtmgmt vNIC profile so there are no filters applied. At this point I'm back to an ubuntu LTS server VM that again, is getting a DHCP IP address, nameserver entries in resolv.conf, and "route" shows correct local routing for addresses on the same subnet and the correct gateway for the rest of the world. The VM is even registering its hostname in our DNS correctly. And I can ping the static IP of the host the VM is on, but not the subnet gateway or anything in the real world. Two things I haven't mentioned that I haven't seen anything in the docs about. My ovirt-engine box is on a different subnet than my hosts, and my hosts are using a bonded pair of physical interfaces (XOR mode) for their single LAN connection. Did I miss something in the docs where these are a problem? Dominik, to answer your thoughts earlier: * name resolution isn't happening at all, the VM can't reach a DNS server * I don't manage the data center network gear but am pretty sure there's no configuration that blocks traffic. This is supported by my temporary success on Friday. And we also have other virtualization hosts (VMWare hosts) in the same subnet, that forward traffic to/from their VMs just fine. * tcpdump on the host's ovirtmgmt interface is pretty noisy but if I grep for the ubuntu DDNS name I see a slew of ARP requests. I can see pings to the host's IP address, and attempts to SSH from the VM to its host. Any attempt to touch anything past the host shows nothing on any interface in tcpdump, not a ping to the subnet gateway, not an SSH attempt, not a DNS query or a ping to known IP address. * hot damn, here's a clue! I can ping other oVirt hosts! (by IP only) I also tried pinging the ovirt-engine box, wasn't surprised when that failed as the VM would need to reach the gateway to get to the different subnet. So it appears that even though I've set up the ovirtmgmt network using defaults, and it has the "VM Network" option checked, my logical network is still set to only allow traffic between the VMs and hosts. What am I missing? -randy From jzygmont at proofpoint.com Mon May 7 19:16:46 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Mon, 7 May 2018 19:16:46 +0000 Subject: [ovirt-users] adding a host In-Reply-To: References: Message-ID: That?s what I did, and I tried everything. 
I might as well just wipe this all out and start over, its pretty disconcerning to know things can stuff up so bad there?s nothing that can be done. Even if I were to reinstall the host with the latest, the engine still sees the old node in there I can?t get rid of, so I?ll have to wipe out everything. From: Alona Kaplan [mailto:alkaplan at redhat.com] Sent: Sunday, May 6, 2018 11:25 PM To: Justin Zygmont Cc: Yanir Quinn ; users at ovirt.org Subject: Re: [ovirt-users] adding a host First restart the engine. Then try to change the required network to be non-required and click on RefreshCapabilities on the host. It should move the host to active state without the need to remove it. If it doesn't help, please attach your full engine.log and server.log. Anyway please update your ovirt to version 4.2.3.4, the bug was fixed there. Thanks, Alona. On Mon, May 7, 2018 at 7:30 AM, Justin Zygmont > wrote: I?ve deselect the required status, I will try to add it again if I am ever able to remove the old host. From: Alona Kaplan [mailto:alkaplan at redhat.com] Sent: Sunday, May 6, 2018 7:31 AM To: Yanir Quinn > Cc: Justin Zygmont >; users at ovirt.org Subject: Re: [ovirt-users] adding a host There was a bug when adding a host to a cluster that contains a required network. It was fixed in 4.2.3.4. Bug-Url- https://bugzilla.redhat.com/1570388 Thanks, Alona. On Sun, May 6, 2018 at 3:49 PM, Yanir Quinn > wrote: For removing the non operational host : 1.Right click on the host name 2.Click on "Confirm host has been rebooted" 3.Remove the host For the issue you are experiencing with host addition, according to the engine logs you have sent, you might need to perform a few steps , see: https://bugzilla.redhat.com/show_bug.cgi?id=1516256#c2 I would also recommend to check the the host's network is not down. Also, during installation of the host,observe the messages in the Events section (UI) Hope this helps. On Thu, May 3, 2018 at 10:07 PM, Justin Zygmont > wrote: I can?t seem to do anything to control the host from the engine, when I select it for Maint, the engine log shows: [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] Running command: MaintenanceNumberOfVdssCommand internal: false. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDSAction group MANIPULATE_HOST with role type ADMIN 2018-05-03 12:00:37,918-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (default task-50) [90ba81ef-21e4-4272-8c59-84786e969ff7] START, SetVdsStatusVDSCommand(HostName = ovnode102, SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='PreparingForMaintenance', nonOperationalReason='NONE', stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 647d5f78 I have only 1 host in the DC, status is Up, the cluster says host count is 2 even though the second host stays Non Operational. I don?t know how to remove it. I just installed and tried to join the DC, this is a fresh installation, the engine was launched through cockpit. Heres what nodectl shows from the host: ovnode102 ~]# nodectl check Status: OK Bootloader ... OK Layer boot entries ... OK Valid boot entries ... OK Mount points ... OK Separate /var ... OK Discard is used ... OK Basic storage ... OK Initialized VG ... OK Initialized Thin Pool ... OK Initialized LVs ... OK Thin storage ... OK Checking available space in thinpool ... OK Checking thinpool auto-extend ... OK vdsmd ... 
OK Thanks, From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Thursday, May 3, 2018 1:19 AM To: Justin Zygmont > Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host Did you try switching the host to maintenance mode first ? What is the state of the data center and how many active hosts do you have now? And did you perform any updates recently or just run a fresh installation ? if so , did you run engine-setup before launching engine ? On Thu, May 3, 2018 at 12:47 AM, Justin Zygmont > wrote: I read this page and it doesn?t help since this is a host that can?t be removed, the ?remove? button is dimmed out. This is 4.22 ovirt node, but the host stays in a ?non operational? state. I notice the logs have a lot of errors, for example: the SERVER log: 2018-05-02 14:40:23,847-07 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (ForkJoinPool-1-worker-14) IJ000609: Attempt to return connection twice: org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null]: java.lang.Throwable: STACKTRACE at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:722) at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.returnConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:611) at org.jboss.jca.core.connectionmanager.pool.AbstractPool.returnConnection(AbstractPool.java:847) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.returnManagedConnection(AbstractConnectionManager.java:725) at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.managedConnectionDisconnected(TxConnectionManagerImpl.java:585) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.disconnectManagedConnection(AbstractConnectionManager.java:988) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.reconnectManagedConnection(AbstractConnectionManager.java:974) at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:792) at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:138) at org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:64) at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at 
org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:118) [dal.jar:] at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105) [dal.jar:] at org.ovirt.engine.core.dao.VmDynamicDaoImpl.getAllRunningForVds(VmDynamicDaoImpl.java:52) [dal.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.isVmRunningOnHost(HostNetworkTopologyPersisterImpl.java:210) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.logChangedDisplayNetwork(HostNetworkTopologyPersisterImpl.java:179) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.auditNetworkCompliance(HostNetworkTopologyPersisterImpl.java:148) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.lambda$persistAndEnforceNetworkCompliance$0(HostNetworkTopologyPersisterImpl.java:100) [vdsbroker.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:93) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.vdsbroker.HostNetworkTopologyPersisterImpl.persistAndEnforceNetworkCompliance(HostNetworkTopologyPersisterImpl.java:154) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.processRefreshCapabilitiesResponse(VdsManager.java:794) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.handleRefreshCapabilitiesResponse(VdsManager.java:598) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.refreshHostSync(VdsManager.java:567) [vdsbroker.jar:] at org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand.executeCommand(RefreshHostCapabilitiesCommand.java:41) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1133) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1285) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1934) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1345) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400) [bll.jar:] at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:] at 
org.ovirt.engine.core.bll.Backend.runAction(Backend.java:468) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:450) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:393) [bll.jar:] at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:78) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:88) at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:101) at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509) at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at 
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438) at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:609) at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53) at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198) at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185) at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81) at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view4.runInternalAction(Unknown Source) [bll.jar:] at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:433) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:127) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:67) 
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown Source) [bll.jar:] at org.ovirt.engine.core.bll.VdsEventListener.refreshHostCapabilities(VdsEventListener.java:598) [bll.jar:] at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:47) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.HostConnectionRefresher$SubscriberRefreshingHostOnHostConnectionChangeEvent.onNext(HostConnectionRefresher.java:30) [vdsbroker.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:118) [vdsm-jsonrpc-java-client.jar:] at org.ovirt.vdsm.jsonrpc.client.events.EventPublisher$EventCallable.call(EventPublisher.java:93) [vdsm-jsonrpc-java-client.jar:] at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [rt.jar:1.8.0_161] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) [rt.jar:1.8.0_161] 2018-05-02 14:40:23,851-07 WARN [com.arjuna.ats.arjuna] (ForkJoinPool-1-worker-14) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000001:-21bd8800:5ae90c48:10afa And the ENGINE log: 2018-05-02 14:40:23,851-07 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (ForkJoinPool-1-worker-14) [52276df5] transaction rolled back 2018-05-02 14:40:23,851-07 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] (ForkJoinPool-1-worker-14) [52276df5] Unable to RefreshCapabilities: IllegalStateException: Transaction Local transaction (delegate=TransactionImple < ac, BasicAction: 0:ffff7f000001:-21bd8800:5ae90c48:10afa status: ActionStatus.ABORTED >, owner=Local transaction context for provider JBoss JTA transaction provider) is not active STATUS_ROLLEDBACK 2018-05-02 14:40:23,888-07 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (ForkJoinPool-1-worker-14) [5c511e51] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,895-07 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:23,898-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Refresh host capabilities finished. Lock released. 
Monitoring can run now for host 'ovnode102 from data-center 'Default' 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Command 'org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand' failed: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] 2018-05-02 14:40:23,898-07 ERROR [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Exception: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener at 1fab84d7[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection at 3f37cf10 connection handles=0 lastReturned=1525297223847 lastValidated=1525290267811 lastCheckedOut=1525296923770 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool at 20550f35 mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool at 5baa90f[pool=ENGINEDataSource] xaResource=LocalXAResourceImpl at 24a7fc0b[connectionListener=1fab84d7 connectionManager=5bec70d2 warned=false currentXid=null productName=PostgreSQL productVersion=9.5.9 jndiName=java:/ENGINEDataSource] txSync=null] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:619) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716) [spring-jdbc.jar:4.3.9.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:766) [spring-jdbc.jar:4.3.9.RELEASE] at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:152) [dal.jar:] . . . . 2018-05-02 14:40:23,907-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-14) [2a0ec90b] EVENT_ID: HOST_REFRESH_CAPABILITIES_FAILED(607), Failed to refresh the capabilities of host ovnode102. 
2018-05-02 14:40:23,907-07 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-14) [2a0ec90b] Lock freed to object 'EngineLock:{exclusiveLocks='[74dfe965-cb11-495a-96a0-3dae6b3cbd75=VDS, HOST_NETWORK74dfe965-cb11-495a-96a0-3dae6b3cbd75=HOST_NETWORK]', sharedLocks=''}' 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] START, GetHardwareInfoAsyncVDSCommand(HostName = ovnode102, VdsIdAndVdsVDSCommandParametersBase:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', vds='Host[ovnode102,74dfe965-cb11-495a-96a0-3dae6b3cbd75]'}), log id: 300f7345 2018-05-02 14:40:25,775-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [] FINISH, GetHardwareInfoAsyncVDSCommand, log id: 300f7345 2018-05-02 14:40:25,802-07 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 74dfe965-cb11-495a-96a0-3dae6b3cbd75 Type: VDS 2018-05-02 14:40:25,805-07 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [f2ef21e] START, SetVdsStatusVDSCommand(HostName = ovnode102., SetVdsStatusVDSCommandParameters:{hostId='74dfe965-cb11-495a-96a0-3dae6b3cbd75', status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 7611d8d8 2018-05-02 14:40:56,722-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:40:56,732-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Running command: SyncNetworkProviderCommand internal: true. 2018-05-02 14:40:56,844-07 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-40) [] User admin at internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2018-05-02 14:40:57,001-07 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [33bdda7f] Lock freed to object 'EngineLock:{exclusiveLocks='[f50bd081-7c5b-4161-a045-068f85d2a476=PROVIDER]', sharedLocks=''}' 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 tasks in queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 1 threads out of 100 and 99 tasks are waiting in the queue. 
2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are waiting in the queue. 2018-05-02 14:44:39,191-07 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 1 tasks are waiting in the queue. From: Yanir Quinn [mailto:yquinn at redhat.com] Sent: Wednesday, May 2, 2018 12:34 AM To: Justin Zygmont > Cc: users at ovirt.org Subject: Re: [ovirt-users] adding a host Hi, What document are you using? See if you find the needed information here: https://ovirt.org/documentation/admin-guide/chap-Hosts/ For engine-related potential errors I recommend also checking the engine.log, and in the UI check the Events section. Regards, Yanir Quinn On Tue, May 1, 2018 at 11:11 PM, Justin Zygmont > wrote: I have tried to add a host to the engine and it just takes forever, never working or giving any error message. When I look in the engine's server.log I see it says the networks are missing. I thought when you install a node and add it to the engine it will add the networks automatically? The docs don't give much information about this, and I can't even remove the host through the UI. What steps are required to prepare a node when several VLANs are involved? _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From ab.zanni at numea.ma Mon May 7 15:50:09 2018 From: ab.zanni at numea.ma (Abdelkarim ZANNI) Date: Mon, 7 May 2018 16:50:09 +0100 Subject: [ovirt-users] dracut-initqueue[488]: Warning: Could not boot. Message-ID: <4963c7fe-dc86-6a55-2429-d13c9eb44070@numea.ma> Hello Ovirt users, I'm trying to install oVirt Node on a Dell server using a bootable USB. After I click install oVirt, the following message is displayed on the screen: dracut-initqueue[488]: Warning: Could not boot. I tried different versions of oVirt Node without success, can anyone point me to how to sort this out? Thank you in advance. -- Abdelkarim ZANNI GSM: +212671644088 NUMEA | Rabat - Morocco -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholler at redhat.com Mon May 7 20:58:45 2018 From: dholler at redhat.com (Dominik Holler) Date: Mon, 7 May 2018 22:58:45 +0200 Subject: [ovirt-users] newbie questions on networking In-Reply-To: References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <20180504114421.7c9cad1d@t460p> Message-ID: <20180507225845.2f075f2c@t460p> On Mon, 7 May 2018 11:43:51 -0700 "Rue, Randy" wrote: > I've sort of had some progress. On Friday I went to the dentist and > when I returned, my VM could ping google. > > I don't believe I changed anything Friday morning but I confess I've > been flailing on this for so long I'm not keeping detailed notes on > what I change. And as I'm evaluating oVirt as a possible replacement > for our production xencenter/xenserver systems, I need to know what > was wrong and what fixed it. > > I reinstalled the ovirt-engine box and two hosts and started again. 
> The only change I've made beyond the default is to remove the > no-mac-spoofing filter from the ovirtmgmt vNIC profile so there are > no filters applied. At this point I'm back to an ubuntu LTS server VM > that again, is getting a DHCP IP address, nameserver entries in > resolv.conf, and "route" shows correct local routing for addresses on > the same subnet and the correct gateway for the rest of the world. > The VM is even registering its hostname in our DNS correctly. And I > can ping the static IP of the host the VM is on, but not the subnet > gateway or anything in the real world. > Can you ping the DHCP server? > Two things I haven't mentioned that I haven't seen anything in the > docs about. My ovirt-engine box is on a different subnet than my > hosts, and my hosts are using a bonded pair of physical interfaces > (XOR mode) for their single LAN connection. Was the bond created before adding the hosts to oVirt, or after adding the hosts via oVirt web UI? If the switch requires configuration for the bond, is this applied? Can you check if the VM can ping the gateway, if you use a simple Ethernet connection instead of the bond? > Did I miss something in the docs where these are a problem? > > Dominik, to answer your thoughts earlier: > > * name resolution isn't happening at all, the VM can't reach a DNS > server > > * I don't manage the data center network gear but am pretty sure > there's no configuration that blocks traffic. This is supported by my > temporary success on Friday. And we also have other virtualization > hosts (VMWare hosts) in the same subnet, that forward traffic to/from > their VMs just fine. > OK, L3 seems to work now sometimes. > * tcpdump on the host's ovirtmgmt interface is pretty noisy but if I > grep for the ubuntu DDNS name I see a slew of ARP requests. I can see > pings to the host's IP address, and attempts to SSH from the VM to > its host. Any attempt to touch anything past the host shows nothing > on any interface in tcpdump, not a ping to the subnet gateway, not an > SSH attempt, not a DNS query or a ping to known IP address. > The outgoing ARP requests look like the traffic of the VM is forwarded to ovirtmgmt. Do you see an ARP reply to the VM? Maybe the VM fails to get the MAC address of the gateway. > * hot damn, here's a clue! I can ping other oVirt hosts! (by IP only) > I also tried pinging the ovirt-engine box, wasn't surprised when that > failed as the VM would need to reach the gateway to get to the > different subnet. > > So it appears that even though I've set up the ovirtmgmt network > using defaults, and it has the "VM Network" option checked, my > logical network is still set to only allow traffic between the VMs > and hosts. > > What am I missing? > > -randy From cboggio at inlinenetworks.com Mon May 7 21:03:31 2018 From: cboggio at inlinenetworks.com (Clint Boggio) Date: Mon, 7 May 2018 21:03:31 +0000 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <20180507225845.2f075f2c@t460p> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <20180504114421.7c9cad1d@t460p> , <20180507225845.2f075f2c@t460p> Message-ID: <6C83F13B-0F5D-4C3A-B14F-AE5F2B8CBCC2@inlinenetworks.com> Randy, this flaky layer two problem reeks of a possible MTU situation between your oVirt switches and your physical switches. > On May 7, 2018, at 3:59 PM, Dominik Holler wrote: > > On Mon, 7 May 2018 11:43:51 -0700 > "Rue, Randy" wrote: > >> I've sort of had some progress. 
On Friday I went to the dentist and >> when I returned, my VM could ping google. >> >> I don't believe I changed anything Friday morning but I confess I've >> been flailing on this for so long I'm not keeping detailed notes on >> what I change. And as I'm evaluating oVirt as a possible replacement >> for our production xencenter/xenserver systems, I need to know what >> was wrong and what fixed it. >> >> I reinstalled the ovirt-engine box and two hosts and started again. >> The only change I've made beyond the default is to remove the >> no-mac-spoofing filter from the ovirtmgmt vNIC profile so there are >> no filters applied. At this point I'm back to an ubuntu LTS server VM >> that again, is getting a DHCP IP address, nameserver entries in >> resolv.conf, and "route" shows correct local routing for addresses on >> the same subnet and the correct gateway for the rest of the world. >> The VM is even registering its hostname in our DNS correctly. And I >> can ping the static IP of the host the VM is on, but not the subnet >> gateway or anything in the real world. >> > > Can you ping the DHCP server? > >> Two things I haven't mentioned that I haven't seen anything in the >> docs about. My ovirt-engine box is on a different subnet than my >> hosts, and my hosts are using a bonded pair of physical interfaces >> (XOR mode) for their single LAN connection. > > Was the bond created before adding the hosts to oVirt, or after adding > the hosts via oVirt web UI? > If the switch requires configuration for the bond, is this applied? > Can you check if the VM can ping the getaway, if you use a simple > Ethernet connection instead of the bond? > >> Did I miss something in the docs where these are a problem? >> >> Dominik, to answer your thoughts earlier: >> >> * name resolution isn't happening at all, the VM can't reach a DNS >> server >> >> * I don't manage the data center network gear but am pretty sure >> there's no configuration that blocks traffic. This is supported by my >> temporary success on Friday. And we also have other virtualization >> hosts (VMWare hosts) in the same subnet, that forward traffic to/from >> their VMs just fine. >> > > OK, L3 seems to work now sometimes. > >> * tcpdump on the host's ovirtmgmt interface is pretty noisy but if I >> grep for the ubuntu DDNS name I see a slew of ARP requests. I can see >> pings to the host's IP address, and attempts to SSH from the VM to >> its host. Any attempt to touch anything past the host shows nothing >> on any interface in tcpdump, not a ping to the subnet gateway, not an >> SSH attempt, not a DNS query or a ping to known IP address. >> > > The outgoing ARP requests looks like the traffic of the VM is forwarded > to ovirtmgmt. > Do you see ARP reply to the VM? > Maybe the VM fails to get the MAC address of the gateway. > >> * hot damn, here's a clue! I can ping other oVirt hosts! (by IP only) >> I also tried pinging the ovirt-engine box, wasn't surprised when that >> failed as the VM would need to reach the gateway to get to the >> different subnet. >> >> So it appears that even though I've set up the ovirtmgmt network >> using defaults, and it has the "VM Network" option checked, my >> logical network is still set to only allow traffic between the VMs >> and hosts. >> >> What am I missing? 
>> >> -randy > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From clam2718 at gmail.com Mon May 7 21:46:57 2018 From: clam2718 at gmail.com (Charles Lam) Date: Mon, 07 May 2018 21:46:57 +0000 Subject: [ovirt-users] dracut-initqueue[488]: Warning: Could not boot. In-Reply-To: <4963c7fe-dc86-6a55-2429-d13c9eb44070@numea.ma> References: <4963c7fe-dc86-6a55-2429-d13c9eb44070@numea.ma> Message-ID: Dear Mr. Zanni: I have had what I believe to be similar issues. I am in no way an expert or even knowledgeable, but from experience I have found this to work: dd if=/tmp/ovirt-node-ng-installer-ovirt-4.2-2018050417.iso of=/dev/sdb This command assumes that you are on CentOS or similar; assumes that your USB stick is at "/dev/sdb"; assumes that you have placed the ISO you want to image the USB stick with at "/tmp/"; and assumes that the name of your ISO is " ovirt-node-ng-installer-ovirt-4.2-2018050417.iso" (most likely it will be something different) Again, I am not that knowledgeable, but I have not found the directions on the oVirt website imaging Node to a USB stick to work for me, nor have I had success (with Node only) with the usually great Rufus. Sincerely, Charles On Mon, May 7, 2018 at 4:37 PM Abdelkarim ZANNI wrote: > Hello Ovirt users, > > > i'm trying to install ovirt node on a Dell server using a bootable usb. > > After i click install ovirt, the following message is displayed on the > screen; > > *dracut-initqueue[488]: Warning: Could not boot. > * > I tried different version of Ovirt node without success, can anyone point me to how to sort this out ? > > Thank you in advance*. > > * > > -- > Abdelkarim ZANNI > GSM: +212671644088 > NUMEA | Rabat - Morocco > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzygmont at proofpoint.com Mon May 7 22:44:08 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Mon, 7 May 2018 22:44:08 +0000 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <20180507225845.2f075f2c@t460p> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <20180504114421.7c9cad1d@t460p> <20180507225845.2f075f2c@t460p> Message-ID: >Was the bond created before adding the hosts to oVirt, or after adding the hosts via oVirt web UI? >If the switch requires configuration for the bond, is this applied? >Can you check if the VM can ping the getaway, if you use a simple Ethernet connection instead of the >bond? Should any of this be done before adding the host to oVirt? From randyrue at gmail.com Mon May 7 22:45:27 2018 From: randyrue at gmail.com (Rue, Randy) Date: Mon, 7 May 2018 15:45:27 -0700 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <6C83F13B-0F5D-4C3A-B14F-AE5F2B8CBCC2@inlinenetworks.com> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <20180504114421.7c9cad1d@t460p> <20180507225845.2f075f2c@t460p> <6C83F13B-0F5D-4C3A-B14F-AE5F2B8CBCC2@inlinenetworks.com> Message-ID: <8e9a3779-7057-36bf-8ef9-3f2cb2b2e694@gmail.com> Looks like the physical interface on the host and the virtual interface on the VM are both at the default 1500 MTU. How can I determine the MTU setting for the physical switches without admin access to them? 
Or do I need to ask the network team? On 5/7/2018 2:03 PM, Clint Boggio wrote: > Randy this flaky layer two problem reeks of a possible MTU situation between your oVirt switches and your physical switches. > >> On May 7, 2018, at 3:59 PM, Dominik Holler wrote: >> >> On Mon, 7 May 2018 11:43:51 -0700 >> "Rue, Randy" wrote: >> >>> I've sort of had some progress. On Friday I went to the dentist and >>> when I returned, my VM could ping google. >>> >>> I don't believe I changed anything Friday morning but I confess I've >>> been flailing on this for so long I'm not keeping detailed notes on >>> what I change. And as I'm evaluating oVirt as a possible replacement >>> for our production xencenter/xenserver systems, I need to know what >>> was wrong and what fixed it. >>> >>> I reinstalled the ovirt-engine box and two hosts and started again. >>> The only change I've made beyond the default is to remove the >>> no-mac-spoofing filter from the ovirtmgmt vNIC profile so there are >>> no filters applied. At this point I'm back to an ubuntu LTS server VM >>> that again, is getting a DHCP IP address, nameserver entries in >>> resolv.conf, and "route" shows correct local routing for addresses on >>> the same subnet and the correct gateway for the rest of the world. >>> The VM is even registering its hostname in our DNS correctly. And I >>> can ping the static IP of the host the VM is on, but not the subnet >>> gateway or anything in the real world. >>> >> Can you ping the DHCP server? >> >>> Two things I haven't mentioned that I haven't seen anything in the >>> docs about. My ovirt-engine box is on a different subnet than my >>> hosts, and my hosts are using a bonded pair of physical interfaces >>> (XOR mode) for their single LAN connection. >> Was the bond created before adding the hosts to oVirt, or after adding >> the hosts via oVirt web UI? >> If the switch requires configuration for the bond, is this applied? >> Can you check if the VM can ping the getaway, if you use a simple >> Ethernet connection instead of the bond? >> >>> Did I miss something in the docs where these are a problem? >>> >>> Dominik, to answer your thoughts earlier: >>> >>> * name resolution isn't happening at all, the VM can't reach a DNS >>> server >>> >>> * I don't manage the data center network gear but am pretty sure >>> there's no configuration that blocks traffic. This is supported by my >>> temporary success on Friday. And we also have other virtualization >>> hosts (VMWare hosts) in the same subnet, that forward traffic to/from >>> their VMs just fine. >>> >> OK, L3 seems to work now sometimes. >> >>> * tcpdump on the host's ovirtmgmt interface is pretty noisy but if I >>> grep for the ubuntu DDNS name I see a slew of ARP requests. I can see >>> pings to the host's IP address, and attempts to SSH from the VM to >>> its host. Any attempt to touch anything past the host shows nothing >>> on any interface in tcpdump, not a ping to the subnet gateway, not an >>> SSH attempt, not a DNS query or a ping to known IP address. >>> >> The outgoing ARP requests looks like the traffic of the VM is forwarded >> to ovirtmgmt. >> Do you see ARP reply to the VM? >> Maybe the VM fails to get the MAC address of the gateway. >> >>> * hot damn, here's a clue! I can ping other oVirt hosts! (by IP only) >>> I also tried pinging the ovirt-engine box, wasn't surprised when that >>> failed as the VM would need to reach the gateway to get to the >>> different subnet. 
>>> >>> So it appears that even though I've set up the ovirtmgmt network >>> using defaults, and it has the "VM Network" option checked, my >>> logical network is still set to only allow traffic between the VMs >>> and hosts. >>> >>> What am I missing? >>> >>> -randy >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> From jzygmont at proofpoint.com Mon May 7 22:55:33 2018 From: jzygmont at proofpoint.com (Justin Zygmont) Date: Mon, 7 May 2018 22:55:33 +0000 Subject: [ovirt-users] dracut-initqueue[488]: Warning: Could not boot. In-Reply-To: <4963c7fe-dc86-6a55-2429-d13c9eb44070@numea.ma> References: <4963c7fe-dc86-6a55-2429-d13c9eb44070@numea.ma> Message-ID: There must be more error messages than that? From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Abdelkarim ZANNI Sent: Monday, May 7, 2018 8:50 AM To: Users at ovirt.org Subject: [ovirt-users] dracut-initqueue[488]: Warning: Could not boot. Hello Ovirt users, i'm trying to install ovirt node on a Dell server using a bootable usb. After i click install ovirt, the following message is displayed on the screen; dracut-initqueue[488]: Warning: Could not boot. I tried different version of Ovirt node without success, can anyone point me to how to sort this out ? Thank you in advance. -- Abdelkarim ZANNI GSM: +212671644088 NUMEA | Rabat - Morocco -------------- next part -------------- An HTML attachment was scrubbed... URL: From randyrue at gmail.com Mon May 7 22:59:54 2018 From: randyrue at gmail.com (Rue, Randy) Date: Mon, 7 May 2018 15:59:54 -0700 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <20180507225845.2f075f2c@t460p> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <20180504114421.7c9cad1d@t460p> <20180507225845.2f075f2c@t460p> Message-ID: I installed the ovirt node to standalone interfaces, then created the bond via the ovirt-node webui at port 9090, before adding the node to the cluster. The DHCP server happens to be in the same subnet but no, I can't ping it as I can't ping anything beyond the physical interfaces of the hosts. I've added a third host and can also ping that from the VM on node 1. For a hoot also spun up a new CentOS VM in case this was an OS problem. Same results. And when the two VMs are on different hosts, they can't ping each other. When I migrate one so they're both on the same host, they can each ping each other. On 5/7/2018 1:58 PM, Dominik Holler wrote: > On Mon, 7 May 2018 11:43:51 -0700 > "Rue, Randy" wrote: > >> I've sort of had some progress. On Friday I went to the dentist and >> when I returned, my VM could ping google. >> >> I don't believe I changed anything Friday morning but I confess I've >> been flailing on this for so long I'm not keeping detailed notes on >> what I change. And as I'm evaluating oVirt as a possible replacement >> for our production xencenter/xenserver systems, I need to know what >> was wrong and what fixed it. >> >> I reinstalled the ovirt-engine box and two hosts and started again. >> The only change I've made beyond the default is to remove the >> no-mac-spoofing filter from the ovirtmgmt vNIC profile so there are >> no filters applied. 
At this point I'm back to an ubuntu LTS server VM >> that again, is getting a DHCP IP address, nameserver entries in >> resolv.conf, and "route" shows correct local routing for addresses on >> the same subnet and the correct gateway for the rest of the world. >> The VM is even registering its hostname in our DNS correctly. And I >> can ping the static IP of the host the VM is on, but not the subnet >> gateway or anything in the real world. >> > Can you ping the DHCP server? > >> Two things I haven't mentioned that I haven't seen anything in the >> docs about. My ovirt-engine box is on a different subnet than my >> hosts, and my hosts are using a bonded pair of physical interfaces >> (XOR mode) for their single LAN connection. > Was the bond created before adding the hosts to oVirt, or after adding > the hosts via oVirt web UI? > If the switch requires configuration for the bond, is this applied? > Can you check if the VM can ping the getaway, if you use a simple > Ethernet connection instead of the bond? > >> Did I miss something in the docs where these are a problem? >> >> Dominik, to answer your thoughts earlier: >> >> * name resolution isn't happening at all, the VM can't reach a DNS >> server >> >> * I don't manage the data center network gear but am pretty sure >> there's no configuration that blocks traffic. This is supported by my >> temporary success on Friday. And we also have other virtualization >> hosts (VMWare hosts) in the same subnet, that forward traffic to/from >> their VMs just fine. >> > OK, L3 seems to work now sometimes. > >> * tcpdump on the host's ovirtmgmt interface is pretty noisy but if I >> grep for the ubuntu DDNS name I see a slew of ARP requests. I can see >> pings to the host's IP address, and attempts to SSH from the VM to >> its host. Any attempt to touch anything past the host shows nothing >> on any interface in tcpdump, not a ping to the subnet gateway, not an >> SSH attempt, not a DNS query or a ping to known IP address. >> > The outgoing ARP requests looks like the traffic of the VM is forwarded > to ovirtmgmt. > Do you see ARP reply to the VM? > Maybe the VM fails to get the MAC address of the gateway. > >> * hot damn, here's a clue! I can ping other oVirt hosts! (by IP only) >> I also tried pinging the ovirt-engine box, wasn't surprised when that >> failed as the VM would need to reach the gateway to get to the >> different subnet. >> >> So it appears that even though I've set up the ovirtmgmt network >> using defaults, and it has the "VM Network" option checked, my >> logical network is still set to only allow traffic between the VMs >> and hosts. >> >> What am I missing? >> >> -randy From cboggio at inlinenetworks.com Mon May 7 23:03:34 2018 From: cboggio at inlinenetworks.com (Clint Boggio) Date: Mon, 7 May 2018 23:03:34 +0000 Subject: [ovirt-users] newbie questions on networking In-Reply-To: <8e9a3779-7057-36bf-8ef9-3f2cb2b2e694@gmail.com> References: <2ce0b234-5c6f-7dee-3479-21d2fc351f87@gmail.com> <9115b040-d035-d164-944b-e7091516c559@gmail.com> <20180504114421.7c9cad1d@t460p> <20180507225845.2f075f2c@t460p> <6C83F13B-0F5D-4C3A-B14F-AE5F2B8CBCC2@inlinenetworks.com>, <8e9a3779-7057-36bf-8ef9-3f2cb2b2e694@gmail.com> Message-ID: You should query the network to team for sure. You should also use use the ping command (from the VM) to troubleshoot possible MTU problems getting to your infrastructure DNS servers and gateway. # ping -M do -s 1472 xxx.xxx.xxx.xxx Starting at 1472 and ?walking? the -s up and down will help you determine your situation. 
If at all possible, you could take the production switches and gateway out of the equation with your own isolated gear. > On May 7, 2018, at 5:45 PM, Rue, Randy wrote: > > Looks like the physical interface on the host and the virtual interface on the VM are both at the default 1500 MTU. > > How can I determine the MTU setting for the physical switches without admin access to them? Or do I need to ask the network team? > > > >> On 5/7/2018 2:03 PM, Clint Boggio wrote: >> Randy this flaky layer two problem reeks of a possible MTU situation between your oVirt switches and your physical switches. >> >>> On May 7, 2018, at 3:59 PM, Dominik Holler wrote: >>> >>> On Mon, 7 May 2018 11:43:51 -0700 >>> "Rue, Randy" wrote: >>> >>>> I've sort of had some progress. On Friday I went to the dentist and >>>> when I returned, my VM could ping google. >>>> >>>> I don't believe I changed anything Friday morning but I confess I've >>>> been flailing on this for so long I'm not keeping detailed notes on >>>> what I change. And as I'm evaluating oVirt as a possible replacement >>>> for our production xencenter/xenserver systems, I need to know what >>>> was wrong and what fixed it. >>>> >>>> I reinstalled the ovirt-engine box and two hosts and started again. >>>> The only change I've made beyond the default is to remove the >>>> no-mac-spoofing filter from the ovirtmgmt vNIC profile so there are >>>> no filters applied. At this point I'm back to an ubuntu LTS server VM >>>> that again, is getting a DHCP IP address, nameserver entries in >>>> resolv.conf, and "route" shows correct local routing for addresses on >>>> the same subnet and the correct gateway for the rest of the world. >>>> The VM is even registering its hostname in our DNS correctly. And I >>>> can ping the static IP of the host the VM is on, but not the subnet >>>> gateway or anything in the real world. >>>> >>> Can you ping the DHCP server? >>> >>>> Two things I haven't mentioned that I haven't seen anything in the >>>> docs about. My ovirt-engine box is on a different subnet than my >>>> hosts, and my hosts are using a bonded pair of physical interfaces >>>> (XOR mode) for their single LAN connection. >>> Was the bond created before adding the hosts to oVirt, or after adding >>> the hosts via oVirt web UI? >>> If the switch requires configuration for the bond, is this applied? >>> Can you check if the VM can ping the getaway, if you use a simple >>> Ethernet connection instead of the bond? >>> >>>> Did I miss something in the docs where these are a problem? >>>> >>>> Dominik, to answer your thoughts earlier: >>>> >>>> * name resolution isn't happening at all, the VM can't reach a DNS >>>> server >>>> >>>> * I don't manage the data center network gear but am pretty sure >>>> there's no configuration that blocks traffic. This is supported by my >>>> temporary success on Friday. And we also have other virtualization >>>> hosts (VMWare hosts) in the same subnet, that forward traffic to/from >>>> their VMs just fine. >>>> >>> OK, L3 seems to work now sometimes. >>> >>>> * tcpdump on the host's ovirtmgmt interface is pretty noisy but if I >>>> grep for the ubuntu DDNS name I see a slew of ARP requests. I can see >>>> pings to the host's IP address, and attempts to SSH from the VM to >>>> its host. Any attempt to touch anything past the host shows nothing >>>> on any interface in tcpdump, not a ping to the subnet gateway, not an >>>> SSH attempt, not a DNS query or a ping to known IP address. 
>>>> >>> The outgoing ARP requests looks like the traffic of the VM is forwarded >>> to ovirtmgmt. >>> Do you see ARP reply to the VM? >>> Maybe the VM fails to get the MAC address of the gateway. >>> >>>> * hot damn, here's a clue! I can ping other oVirt hosts! (by IP only) >>>> I also tried pinging the ovirt-engine box, wasn't surprised when that >>>> failed as the VM would need to reach the gateway to get to the >>>> different subnet. >>>> >>>> So it appears that even though I've set up the ovirtmgmt network >>>> using defaults, and it has the "VM Network" option checked, my >>>> logical network is still set to only allow traffic between the VMs >>>> and hosts. >>>> >>>> What am I missing? >>>> >>>> -randy >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> > >