How to migrate Self Hosted Engine

Hello,
I have a hyperconverged Gluster cluster with SHE (Self Hosted Engine) and 3 hosts, born in 4.0.5. The installation was done starting from ovirt01 (named hosted_engine_1 in the web admin GUI) and then deploying two more hosts, ovirt02 and ovirt03, from the web admin GUI itself.
All seems OK. I can migrate a normal VM from one host to another, and it is nice that I don't lose the console now. But if I try from the web admin GUI to migrate the self hosted engine I get this message:

https://drive.google.com/file/d/0BwoPbcrMv8mvY3pURVRkX0p4OW8/view?usp=sharin...

Is this because the only way to migrate the engine is to put its hosting host into maintenance, or is there anything wrong? I don't understand the message:

The host ovirt02.localdomain.local did not satisfy internal filter HA because it is not a Hosted Engine host.

Some commands executed on hosted_engine_1 (ovirt01):

[root@ovirt01 ~]# vdsClient -s 0 glusterHostsList
{'hosts': [{'hostname': '10.10.100.102/24', 'status': 'CONNECTED', 'uuid': 'e9717281-a356-42aa-a579-a4647a29a0bc'}, {'hostname': 'ovirt03.localdomain.local', 'status': 'CONNECTED', 'uuid': 'ec81a04c-a19c-4d31-9d82-7543cefe79f3'}, {'hostname': 'ovirt02.localdomain.local', 'status': 'CONNECTED', 'uuid': 'b89311fe-257f-4e44-8e15-9bff6245d689'}], 'status': {'code': 0, 'message': 'Done'}}
Done

[root@ovirt01 ~]# vdsClient -s 0 list

87fd6bdb-535d-45b8-81d4-7e3101a6c364
    Status = Up
    nicModel = rtl8139,pv
    statusTime = 4691827920
    emulatedMachine = pc
    pid = 18217
    vmName = HostedEngine
    devices = [{'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': '08628a0d-1c2a-43e9-8820-4c02f14d04e9', 'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}, {'alias': 'rng0', 'specParams': {'source': 'random'}, 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio', 'type': 'rng'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vga', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc', 'specParams': {'spiceSecureChannels': 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:0a:e7:ba', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '79a745a0-e691-4a3d-8d6b-c94306db9113', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': '6be25e51-0944-4fc0-93fe-4ecabe32ac6b', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '00000000-0000-0000-0000-000000000000', 'reqsize': '0', 'index': '0', 'iface': 'virtio', 'apparentsize': '10737418240', 'alias': 'virtio-disk0', 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '3395743744', 'type': 'disk', 'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volumeInfo': {'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volType': 'path', 'leaseOffset': 0, 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'leasePath': '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease', 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path': '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'}, 'format': 'raw', 'deviceId': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'specParams': {}, 'volumeChain': [{'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volType': 'path', 'leaseOffset': 0, 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'leasePath': '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease', 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path': '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'}]}, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}]
    guestDiskMapping = {'cf8b8f4e-fa01-457e-8': {'name': '/dev/vda'}, 'QEMU_DVD-ROM_QM00003': {'name': '/dev/sr0'}}
    vmType = kvm
    clientIp =
    displaySecurePort = -1
    memSize = 6144
    displayPort = 5900
    cpuType = Broadwell
    spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
    smp = 1
    displayIp = 0
    display = vnc
    pauseCode = NOERR
    maxVCpus = 2
[root@ovirt01 ~]#

Thanks,
Gianluca
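
A quick way to interpret the "did not satisfy internal filter HA because it is not a Hosted Engine host" message is to check, on each host that should be able to run the engine VM, whether the hosted-engine HA services are present and running. A minimal sketch, assuming the standard oVirt 4.0 hosted-engine service names:

    # Run on ovirt02 and ovirt03: if these units are missing or inactive,
    # the engine does not consider the host a hosted-engine (HA) host,
    # and the scheduler's HA filter will exclude it.
    systemctl status ovirt-ha-agent ovirt-ha-broker

    # The shared HA metadata lists every hosted-engine host; on a host that
    # was never deployed for hosted-engine this command will not report a score.
    hosted-engine --vm-status

Only hosts that the HA agent reports with a positive score should be candidates for running the engine VM, which is what the HA filter enforces.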

On Sun, Nov 20, 2016 at 2:54 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]
Other infos:

1) time is ok between the 3 hosts

2) hosted-engine --vm-status gives this:

[root@ovirt01 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date              : True
Hostname                       : ovirt01.localdomain.local
Host ID                        : 1
Engine status                  : {"health": "good", "vm": "up", "detail": "up"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : a12c7427
Host timestamp                 : 397487
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=397487 (Sun Nov 20 14:59:38 2016)
    host-id=1
    score=3400
    maintenance=False
    state=EngineUp
    stopped=False
[root@ovirt01 ~]#

[root@ovirt02 ~]# hosted-engine --vm-status
You must run deploy first
[root@ovirt02 ~]#

[root@ovirt03 ~]# hosted-engine --vm-status
You must run deploy first
[root@ovirt03 ~]#

So it seems there is something wrong with the setup. I thought that deploying from the web admin GUI would have put the hosts into the same environment...
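
The "You must run deploy first" answer on ovirt02 and ovirt03 points the same way: adding a host from the web admin GUI does not by itself configure it as a hosted-engine host. A rough check for whether a host has ever been through hosted-engine deployment, assuming the default configuration path used by ovirt-hosted-engine-setup:

    # Present only on hosts where hosted-engine deployment has actually run:
    ls -l /etc/ovirt-hosted-engine/hosted-engine.conf

    # Without that configuration the HA services stay disabled/inactive:
    systemctl is-enabled ovirt-ha-agent ovirt-ha-broker
    systemctl is-active ovirt-ha-agent ovirt-ha-broker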

On Sun, Nov 20, 2016 at 4:02 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]
So it seems there is something wrong with the setup. I thought that deploying from the web admin GUI would have put the hosts into the same environment...
There is now an option for this in the gui, did you mark it? See also:
https://bugzilla.redhat.com/show_bug.cgi?id=1167262
-- Didi

On Sun, Nov 20, 2016 at 4:08 PM, Yedidyah Bar David <didi@redhat.com> wrote:
There is now an option for this in the gui, did you mark it? See also:
Ah.. I see. In the "New Host" window there is a section named "Hosted Engine"; it defaults to "None" and I had kept that default:

https://drive.google.com/file/d/0BwoPbcrMv8mvME9CVGFRLTB0b0k/view?usp=sharin...

I didn't know about it. I have verified that I was able to put one host into maintenance (the only VM running on it was automatically migrated) and then select "Reinstall"; in the proposed window I now selected "Deploy" in the similar Hosted Engine section:

https://drive.google.com/file/d/0BwoPbcrMv8mvWTJMQXpwbHJYc00/view?usp=sharin...

It seems ok now:

[root@ovirt02 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date              : True
Hostname                       : ovirt01.localdomain.local
Host ID                        : 1
Engine status                  : {"health": "good", "vm": "up", "detail": "up"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 8e1ee066
Host timestamp                 : 429820
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=429820 (Sun Nov 20 23:58:31 2016)
    host-id=1
    score=3400
    maintenance=False
    state=EngineUp
    stopped=False

--== Host 2 status ==--

Status up-to-date              : True
Hostname                       : 192.168.150.103
Host ID                        : 2
Engine status                  : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 839f79f5
Host timestamp                 : 429736
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=429736 (Sun Nov 20 23:58:37 2016)
    host-id=2
    score=3400
    maintenance=False
    state=EngineDown
    stopped=False
[root@ovirt02 ~]#

Is it? I was also able now to migrate the hosted engine VM to the second host and to connect without problems to its console. I'm going to change the third host as well.

Two notes:

1) It would be nice to pre-filter the drop-down box when you have to choose the host to migrate the hosted engine to, so that if there are no hosts available you are given a related message with no choice at all, and if only a subset of hosts in the cluster is eligible, you are offered only those and not all the hosts in the cluster.

2) If the GUI option becomes the default and preferred way to deploy hosts in self hosted engine environments, I think it should be made clearer that if you follow the default action you will not have high availability for the hosted engine VM. Either change the default action to "Deploy", or show a popup if the hosted engine VM has only one host configured for it but there are other hosts in the cluster. Just my opinion.

Thanks,
Gianluca
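
For reference, the "move the engine VM off its current host" step can also be driven from the command line through hosted-engine local maintenance, which the HA agents handle on their own; a sketch of that alternative (not the GUI flow described above):

    # On the host currently running the HostedEngine VM:
    hosted-engine --set-maintenance --mode=local   # drops the host's score; the engine VM migrates to another HA host
    hosted-engine --vm-status                      # watch the VM come up elsewhere

    # Afterwards, make the host eligible again:
    hosted-engine --set-maintenance --mode=none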

On Mon, Nov 21, 2016 at 1:09 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Sun, Nov 20, 2016 at 4:08 PM, Yedidyah Bar David <didi@redhat.com> wrote:
There is now an option for this in the gui, did you mark it? See also:
Ah.. I see. In the "New Host" window there is a section named "Hosted Engine"; it defaults to "None" and I had kept that default:
https://drive.google.com/file/d/0BwoPbcrMv8mvME9CVGFRLTB0b0k/view?usp=sharin...
I didn't know about it. I have verified that I was able to put one host into maintenance (the only VM running on it was automatically migrated) and then select "Reinstall"; in the proposed window I now selected "Deploy" in the similar Hosted Engine section: https://drive.google.com/file/d/0BwoPbcrMv8mvWTJMQXpwbHJYc00/view?usp=sharin...
It seems ok now
Mostly, yes.
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date              : True
Hostname                       : ovirt01.localdomain.local
Host ID                        : 1
Engine status                  : {"health": "good", "vm": "up", "detail": "up"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 8e1ee066
Host timestamp                 : 429820
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=429820 (Sun Nov 20 23:58:31 2016)
    host-id=1
    score=3400
    maintenance=False
    state=EngineUp
    stopped=False
--== Host 2 status ==--
Status up-to-date              : True
Hostname                       : 192.168.150.103
This is the address you provided in the ui, right? I suggest using an FQDN and making it well-resolvable. It will then be easier to change the IP address if needed.
Host ID                        : 2
Engine status                  : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 839f79f5
Host timestamp                 : 429736
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=429736 (Sun Nov 20 23:58:37 2016)
    host-id=2
    score=3400
    maintenance=False
    state=EngineDown
    stopped=False
[root@ovirt02 ~]#
Is it? I was also able now to migrate the hosted engine VM to the second host and to connect without problems to its console. I'm going to change the third host as well.
Two notes:

1) It would be nice to pre-filter the drop-down box when you have to choose the host to migrate the hosted engine to, so that if there are no hosts available you are given a related message with no choice at all, and if only a subset of hosts in the cluster is eligible, you are offered only those and not all the hosts in the cluster.
Makes sense, please open an RFE to track this. Thanks.
2) If the GUI option becomes the default and preferred way to deploy hosts in self hosted engine environments, I think it should be made clearer that if you follow the default action you will not have high availability for the hosted engine VM. Either change the default action to "Deploy", or show a popup if the hosted engine VM has only one host configured for it but there are other hosts in the cluster. Just my opinion.
Makes sense too, but I wonder if people will then be annoyed by forgetting to uncheck it even after having enough HA hosts, which does have its cost (both actual resource use and also reservations, IIUC). Perhaps we should enable it by default and/or remind only until you have enough HA hosts, which can be a configurable number (and default e.g. to 3).

Best regards,
-- Didi
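
On the "enough HA hosts" point, the number of hosts currently backing the engine VM can be read straight from the shared HA metadata; a small sketch:

    # Every deployed hosted-engine host shows up as a "Host N status" section with a score:
    hosted-engine --vm-status | grep -E 'Hostname|Score'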

On Mon, Nov 21, 2016 at 8:54 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Mon, Nov 21, 2016 at 1:09 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Two notes:

1) It would be nice to pre-filter the drop-down box when you have to choose the host to migrate the hosted engine to, so that if there are no hosts available you are given a related message with no choice at all, and if only a subset of hosts in the cluster is eligible, you are offered only those and not all the hosts in the cluster.
Makes sense, please open an RFE to track this. Thanks.
I've been busy these last few days... Opened it now: https://bugzilla.redhat.com/show_bug.cgi?id=1399609
2) If the GUI option becomes the default and preferred way to deploy hosts in self hosted engine environments, I think it should be made clearer that if you follow the default action you will not have high availability for the hosted engine VM. Either change the default action to "Deploy", or show a popup if the hosted engine VM has only one host configured for it but there are other hosts in the cluster. Just my opinion.
Makes sense too, but I wonder if people will then be annoyed by forgetting to uncheck it even after having enough HA hosts, which does have its cost (both actual resource use and also reservations, IIUC). Perhaps we should enable it by default and/or remind only until you have enough HA hosts, which can be a configurable number (and default e.g. to 3).
and here: https://bugzilla.redhat.com/show_bug.cgi?id=1399613

Cheers,
Gianluca

On Tue, Nov 29, 2016 at 2:27 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]
Thanks!
-- Didi
participants (2):
- Gianluca Cecchi
- Yedidyah Bar David