
Hi,

maybe it was already discussed here, but I haven't found it quickly in the archive ;( I upgraded the hosted engine and one host; here are my notes and questions. The upgrade went well, I only needed to manually fix the memory value in the database for the hosted engine:

```
[ ERROR ] schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema refresh failed
```

1) firewalld
After upgrading the host, I needed to stop firewalld. It seems that the rules are not generated correctly; the engine was not able to connect to the host. How can I fix it?

2) old repo removal
Could I remove the 4.1 repo? If yes, what is the best way to do that?

3) Hosted Engine HA
Hosted Engine HA on upgraded hosts is 3400, the same as on the 4.1 hosts. Is this good or bad?

regards
Peter

--
*Peter Hudec* Infrastructure architect phudec@cnc.sk
*CNC, a.s.* Borská 6, 841 04 Bratislava Reception: +421 2 35 000 100
Mobile: +421 905 997 203 *www.cnc.sk*

On Tue, Jan 9, 2018 at 10:13 AM, Peter Hudec <phudec@cnc.sk> wrote:
Hi,
maybe it was already discussed here, but I haven't found it quickly in the archive ;(
I upgraded the hosted engine and one host; here are my notes and questions. The upgrade went well, I only needed to manually fix the memory value in the database for the hosted engine:
[ ERROR ] schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql [ ERROR ] Failed to execute stage 'Misc configuration': Engine schema refresh failed
You mean you changed something in the db and then ran engine-setup again and it worked? Can you please share both setup logs and the change you made in between? Thanks.
1) firewalld: after upgrading the host, I needed to stop firewalld. It seems that the rules are not generated correctly; the engine was not able to connect to the host. How can I fix it?
Please check/share relevant files from /var/log/ovirt-engine/ansible/ and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and attach them there.
2) old repo removal: Could I remove the 4.1 repo? If yes, what is the best way to do that?
I think 'yum remove ovirt-release41', or remove the relevant files in /etc/yum.repos.d.
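For example, both options as a short sketch (the release package name is taken from the answer above; the exact .repo file names under /etc/yum.repos.d may differ per setup, so list them before deleting):

```
# Option 1: remove the 4.1 release package
yum remove ovirt-release41

# Option 2: remove the 4.1 repo files directly (names may differ)
ls /etc/yum.repos.d/ovirt-4.1*
rm /etc/yum.repos.d/ovirt-4.1*.repo
```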
3) Hosted Engine HA: Hosted Engine HA on upgraded hosts is 3400, the same as on the 4.1 hosts. Is this good or bad?
It's good. Best regards,
-- Didi

I followed this thread http://users.ovirt.narkive.com/HR6UmRcn/ovirt-users-ovirt-4-2-alpha-upgrade-... and ran:

```
select vm_name from vm_static where mem_size_mb > max_memory_size_mb;

begin;
update vm_static set max_memory_size_mb = mem_size_mb
 where mem_size_mb > max_memory_size_mb;
commit;
```
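(For reference, a minimal sketch of opening a session to run the statements above on the engine machine; it assumes the local PostgreSQL host, port and 'engine' database/user visible in the schema.sh log below, so adjust to your deployment:)

```
psql -h localhost -p 5432 -U engine -d engine
```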
In my case it was only HostedEngine. After that I reran the setup. See the log file from the failed upgrade and look for:

```
Running upgrade sql script '/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql'...
2018-01-09 08:07:02,091+0100 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20180109080109-ftui7e.log', '-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql:2: ERROR: check constraint "vm_static_max_memory_size_lower_bound" is violated by some row
FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
2018-01-09 08:07:02,092+0100 ERROR otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:374 schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
```

On 09/01/2018 09:35, Yedidyah Bar David wrote:
You mean you changed something in the db and then ran engine-setup again and it worked?
Can you please share both setup logs and the change you made in between? Thanks.


The HA score is flapping between 3400 and 0 ;( And I'm not able to migrate any other VM to this host either.

Logs from the /var/log/ovirt-hosted-engine-ha/agent.log file:

```
MainThread::INFO::2018-01-08 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=8232312 (Mon Jan 8 21:44:29 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=8232316 (Mon Jan 8 21:44:33 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n', 'hostname': 'dipovirt03.cnc.sk', 'host-id': 1, 'engine-status': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': 'f28d4648', 'local_conf_timestamp': 8232316, 'host-ts': 8232312}
MainThread::INFO::2018-01-08 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host dipovirt02.cnc.sk (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetada...skipping...
neVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbeb79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
MainThread::INFO::2018-01-09 10:15:13,904::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-01-09 10:15:13,905::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=38598 (Tue Jan 9 09:23:33 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=38598 (Tue Jan 9 09:23:34 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n', 'hostname': 'dipovirt03.cnc.sk', 'alive': False, 'host-id': 1, 'engine-status': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '4c1d1890', 'local_conf_timestamp': 38598, 'host-ts': 38598}
MainThread::INFO::2018-01-09 10:15:13,905::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host dipovirt02.cnc.sk (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=40677 (Tue Jan 9 09:24:11 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=40677 (Tue Jan 9 09:24:11 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n', 'hostname': 'dipovirt02.cnc.sk', 'alive': False, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '3bf104bc', 'local_conf_timestamp': 40677, 'host-ts': 40677}
MainThread::INFO::2018-01-09 10:15:13,905::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 39540.0, 'maintenance': False, 'cpu-load': 0.0432, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-01-09 10:15:13,905::states::775::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Another host already took over..
MainThread::INFO::2018-01-09 10:15:13,928::state_decorators::88::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineForceStop'>
MainThread::INFO::2018-01-09 10:15:14,046::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineForceStop) sent? sent
MainThread::INFO::2018-01-09 10:15:14,464::hosted_engine::494::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineForceStop (score: 3400)
MainThread::INFO::2018-01-09 10:15:14,467::hosted_engine::1002::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-poweroff`
MainThread::INFO::2018-01-09 10:15:15,198::hosted_engine::1007::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-01-09 10:15:15,198::hosted_engine::1008::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr: Command VM.destroy with args {'vmID': '9a8ea503-f598-433e-9751-93aee3e7b347'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'9a8ea503-f598-433e-9751-93aee3e7b347'})
MainThread::ERROR::2018-01-09 10:15:15,199::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Failed to stop engine vm with /usr/sbin/hosted-engine --vm-poweroff: Command VM.destroy with args {'vmID': '9a8ea503-f598-433e-9751-93aee3e7b347'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'9a8ea503-f598-433e-9751-93aee3e7b347'})
MainThread::ERROR::2018-01-09 10:15:15,199::hosted_engine::1019::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Failed to stop engine VM: Command VM.destroy with args {'vmID': '9a8ea503-f598-433e-9751-93aee3e7b347'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'9a8ea503-f598-433e-9751-93aee3e7b347'})
MainThread::INFO::2018-01-09 10:15:15,317::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-ReinitializeFSM) sent? sent
MainThread::INFO::2018-01-09 10:15:15,356::hosted_engine::494::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state ReinitializeFSM (score: 0)
MainThread::INFO::2018-01-09 10:15:25,560::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (ReinitializeFSM-EngineDown) sent? sent
```

Peter

On 09/01/2018 09:35, Yedidyah Bar David wrote:
3) Hosted Engine HA: Hosted Engine HA on upgraded hosts is 3400, the same as on the 4.1 hosts. Is this good or bad?
It's good.
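(As an aside, one way to watch the score flapping described above live from any HA host is to poll the standard hosted-engine status command:)

```
# Poll the hosted-engine HA view to watch score/state transitions:
watch -n 5 'hosted-engine --vm-status'
```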

The old hypervisor (oVirt 4.1.8) has a problem releasing the HE due to this exception. The HE is on the NFS store.

```
MainThread::INFO::2018-01-09 10:40:28,497::upgrade::998::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade_35_36) Host configuration is already up-to-date
MainThread::INFO::2018-01-09 10:40:28,498::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf) Reloading vm.conf from the shared storage domain
MainThread::INFO::2018-01-09 10:40:28,498::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store) Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2018-01-09 10:40:28,498::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2018-01-09 10:40:28,498::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbeb79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
MainThread::INFO::2018-01-09 10:40:28,517::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store) Found an OVF for HE VM, trying to convert
MainThread::ERROR::2018-01-09 10:40:28,523::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 191, in _run_agent
    return action(he)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 64, in action_proper
    return he.start_monitoring()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 421, in start_monitoring
    self._config.refresh_vm_conf()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 496, in refresh_vm_conf
    content_from_ovf = self._get_vm_conf_content_from_ovf_store()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 438, in _get_vm_conf_content_from_ovf_store
    conf = ovf2VmParams.confFromOvf(heovf)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", line 283, in confFromOvf
    vmConf = toDict(ovf)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", line 210, in toDict
    vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS + 'id']
  File "lxml.etree.pyx", line 2272, in lxml.etree._Attrib.__getitem__ (src/lxml/lxml.etree.c:55336)
KeyError: '{http://schemas.dmtf.org/ovf/envelope/1/}id'
```

A quick fix is to follow https://gerrit.ovirt.org/#/c/84802/2/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/IOvfBuilder.java and remove the trailing '/' in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py.
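(For illustration, a hedged sketch of the kind of one-character change meant here; the OVF_NS constant is inferred from the traceback above, so verify against the actual file before editing:)

```python
# ovf2VmParams.py -- namespace prefix used to build OVF attribute keys.
# Before (trailing slash inside the namespace, producing the KeyError):
OVF_NS = '{http://schemas.dmtf.org/ovf/envelope/1/}'
# After (trailing '/' removed, so OVF_NS + 'id' matches the attribute
# key actually present in the OVF written by the 4.2 engine):
OVF_NS = '{http://schemas.dmtf.org/ovf/envelope/1}'
```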

On Tue, Jan 9, 2018 at 12:04 PM, Peter Hudec <phudec@cnc.sk> wrote:
A quick fix is to follow https://gerrit.ovirt.org/#/c/84802/2/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/IOvfBuilder.java
and remove the trailing '/' in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py.
Adding Denis. Thanks for the logs!
-- Didi

The guest migration issue was due to firewalld; in the logs I found:

```
2018-01-09 10:12:06,700+0100 ERROR (migsrc/d6e3745b) [virt.vm] (vmId='d6e3745b-1444-42a3-8cc0-29eaf59b8520') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 382, in run
    self._setupVdsConnection()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 219, in _setupVdsConnection
    client = self._createClient(port)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 206, in _createClient
    client_socket = utils.create_connected_socket(host, int(port), sslctx)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 950, in create_connected_socket
    sock.connect((host, port))
  File "/usr/lib64/python2.7/site-packages/M2Crypto/SSL/Connection.py", line 181, in connect
    self.socket.connect(addr)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 113] No route to host
```

Disabling firewalld solved the issue, so back to the original problem with the firewall ;)
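(Rather than stopping firewalld entirely, a possible interim workaround is to open the ports migration needs; a rough sketch, assuming the default vdsm port and libvirt live-migration port range — verify against your setup before relying on it:)

```
firewall-cmd --permanent --add-port=54321/tcp        # vdsm
firewall-cmd --permanent --add-port=49152-49216/tcp  # libvirt live migration
firewall-cmd --reload
```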

If I run a host reinstall with custom firewall rules in /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml, the task fails because firewalld is not running. The reinstall disables firewalld and enables iptables-services. I'm a little bit confused ;(

```
---
- name: Enable additional port on firewalld
  firewalld:
    port: "10050/tcp"
    permanent: yes
    immediate: yes
    state: enabled
```

```
2018-01-09 13:27:30,103 p=13550 u=ovirt | included: /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for dipovirt01.cnc.sk
2018-01-09 13:27:30,134 p=13550 u=ovirt | TASK [Enable additional port on firewalld] *************************************
2018-01-09 13:27:32,089 p=13550 u=ovirt | fatal: [dipovirt01.cnc.sk]: FAILED! => {"changed": false, "module_stderr": "Shared connection to dipovirt01.cnc.sk closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n  File \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in <module>\r\n    main()\r\n  File \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in main\r\n    module.fail(msg='firewall is not currently running, unable to perform immediate actions without a running firewall daemon')\r\nAttributeError: 'AnsibleModule' object has no attribute 'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
2018-01-09 13:27:32,095 p=13550 u=ovirt | PLAY RECAP *********************************************************************
```

After the reinstallation the status of firewalld is:

```
[PROD] root@dipovirt01.cnc.sk: /var/log/vdsm
# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
```

So how can I switch to firewalld? The iptables-services package cannot be removed due to dependencies.

Peter
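(One way to make such a post-task tolerant of firewalld being down is to guard it with a runtime check; a hedged sketch, not tested against ovirt-host-deploy:)

```
---
# Check whether firewalld is actually running before asking the
# firewalld module for an immediate (runtime) change.
- name: Check whether firewalld is running
  command: systemctl is-active firewalld
  register: firewalld_state
  failed_when: false
  changed_when: false

- name: Enable additional port on firewalld
  firewalld:
    port: "10050/tcp"
    permanent: yes
    immediate: yes
    state: enabled
  when: firewalld_state.stdout == "active"
```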
(Adding Ondra for the firewalld stuff. But I think it's probably easier to debug if you open a bug and attach logs there.)

On Tue, Jan 9, 2018 at 2:34 PM, Peter Hudec <phudec@cnc.sk> wrote:
If I run host reinstall with custom firewall rules in /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml, the task fails because firewalld is not running.

The reinstall disables firewalld and enables iptables-services. I'm a little confused ;(
So how can I switch to firewalld? The iptables-services package cannot be removed because of dependencies.
Peter
-- Didi
It's not a bug, as far as I can tell from digging. In the logs I found:

2018-01-09 08:23:22,421+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
2018-01-09 08:23:22,422+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'

So how do I disable iptables and enable firewalld?

Peter

On 09/01/2018 13:47, Yedidyah Bar David wrote:
(Adding Ondra for the firewalld stuff. But I think it's probably easier to debug if you open a bug and attach logs there).
-- *Peter Hudec* Infrastructure Architect phudec@cnc.sk *CNC, a.s.* Borská 6, 841 04 Bratislava Reception: +421 2 35 000 100 Mobile: +421 905 997 203 *www.cnc.sk*
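For reference, the raw service flip on a host is only two tasks with Ansible's systemd module -- a minimal sketch; note that, as the replies below explain, the supported route is changing the cluster firewall type and reinstalling the host rather than flipping services by hand:

---
# Hedged sketch: swap the active firewall services on a single host.
- name: Enable and start firewalld
  systemd:
    name: firewalld
    state: started
    enabled: yes

- name: Stop and disable the legacy iptables service
  systemd:
    name: iptables
    state: stopped
    enabled: no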
On Tue, Jan 9, 2018 at 2:25 PM, Peter Hudec <phudec@cnc.sk> wrote:
It's not a bug, as far as I can tell from digging. In the logs I found:

2018-01-09 08:23:22,421+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
2018-01-09 08:23:22,422+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'

So how do I disable iptables and enable firewalld?
Hi,

The firewall type is a cluster-level option. Please go to Clusters, edit the selected cluster, and change Firewall type to firewalld. After that you need to execute Reinstall on all hosts in the cluster to switch them from iptables to firewalld.

Btw, I assume this is an upgraded cluster, so please make sure that VDSM 4.20 (from oVirt 4.2) is installed on all hosts before making this change.

Thanks

Martin
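The same change can also be scripted against the engine API. A minimal sketch with the oVirt Ansible modules, assuming an Ansible release whose ovirt_cluster module already supports the firewall_type option; the engine URL, credentials, cluster and data center names below are placeholders:

---
- hosts: localhost
  connection: local
  tasks:
    - name: Obtain an SSO token for the engine API
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api   # placeholder URL
        username: admin@internal
        password: "{{ engine_password }}"                 # placeholder secret
        insecure: yes

    - name: Switch the cluster firewall type to firewalld
      ovirt_cluster:
        auth: "{{ ovirt_auth }}"
        name: Default                                     # placeholder cluster name
        data_center: Default                              # placeholder data center
        firewall_type: firewalld

    - name: Revoke the SSO token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"

After the change, each host still needs the Reinstall step described above to actually regenerate its firewall rules.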
-- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o.
Thanks. The upgrade is in progress; I needed to solve some issues first.

Peter
-- *Peter Hudec* Infrastructure Architect phudec@cnc.sk *CNC, a.s.* Borská 6, 841 04 Bratislava Reception: +421 2 35 000 100 Mobile: +421 905 997 203 *www.cnc.sk*
On Tue, Jan 9, 2018 at 3:25 PM, Peter Hudec <phudec@cnc.sk> wrote:
It's not a bug, as far as I can tell from digging.
Very well :-)
In the logs I found:
Which logs?
2018-01-09 08:23:22,421+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
2018-01-09 08:23:22,422+0100 DEBUG otopi.context context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'
So how do I disable iptables and enable firewalld?
If host-deploy, then it's a per-host/per-cluster option you should be able to choose in the web admin UI.
-- Didi
For example ./ovirt-engine/host-deploy/ovirt-host-mgmt-20180101111924-dipovirt03.cnc.sk-15970547.log

The Ansible firewalld module does not work if firewalld is not running. In my case the cluster setting was set to 'iptables', so that was my mistake. Consider this solved ;)

Peter
-- *Peter Hudec* Infrastructure Architect phudec@cnc.sk *CNC, a.s.* Borská 6, 841 04 Bratislava Reception: +421 2 35 000 100 Mobile: +421 905 997 203 *www.cnc.sk*