[ovirt-users] How to migrate Self Hosted Engine
Yedidyah Bar David
didi at redhat.com
Sun Nov 20 15:08:58 UTC 2016
On Sun, Nov 20, 2016 at 4:02 PM, Gianluca Cecchi
<gianluca.cecchi at gmail.com> wrote:
>
>
> On Sun, Nov 20, 2016 at 2:54 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
> wrote:
>>
>> Hello,
>> I have a hyperconverged Gluster cluster with a Self Hosted Engine (SHE)
>> and 3 hosts, all installed with 4.0.5.
>> The installation was done starting from ovirt01 (named hosted_engine_1 in
>> the webadmin GUI), and then the two other hosts, ovirt02 and ovirt03, were
>> deployed from the webadmin GUI itself.
>> All seems OK.
>> I can migrate a normal VM from one host to another, and it is nice that I
>> no longer lose the console.
>> But if I try to migrate the self hosted engine from the webadmin GUI, I
>> get this message:
>>
>>
>> https://drive.google.com/file/d/0BwoPbcrMv8mvY3pURVRkX0p4OW8/view?usp=sharing
>>
>> Is this because the only way to migrate the engine is to put its hosting
>> host into maintenance, or is something wrong?
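In general the engine VM can only run on hosted-engine hosts, so with a
single such host there is nowhere for it to migrate to. For reference, on
a correctly deployed multi-host setup you can also trigger the migration
from the shell of the host currently running the engine VM. A minimal
sketch (host name taken from your setup):

    # Local maintenance lowers this host's score, so the HA agents
    # migrate the engine VM to another hosted-engine host:
    [root@ovirt01 ~]# hosted-engine --set-maintenance --mode=local

    # Once the VM is up elsewhere, make the host available again:
    [root@ovirt01 ~]# hosted-engine --set-maintenance --mode=none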
>> I don't understand the message:
>>
>> The host ovirt02.localdomain.local did not satisfy internal filter HA
>> because it is not a Hosted Engine host..
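That message is literal: the HA scheduling filter only considers hosts
that run the hosted-engine HA services, and from the engine's point of
view ovirt02 and ovirt03 are not such hosts. You can verify this on each
host (a sketch, assuming EL7 hosts with systemd):

    # Both services must be installed and running on every host that
    # should be able to run the engine VM:
    systemctl status ovirt-ha-agent ovirt-ha-broker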
>>
>>
>> Some commands executed on hosted_engine_1 (ovirt01):
>> [root@ovirt01 ~]# vdsClient -s 0 glusterHostsList
>> {'hosts': [{'hostname': '10.10.100.102/24',
>> 'status': 'CONNECTED',
>> 'uuid': 'e9717281-a356-42aa-a579-a4647a29a0bc'},
>> {'hostname': 'ovirt03.localdomain.local',
>> 'status': 'CONNECTED',
>> 'uuid': 'ec81a04c-a19c-4d31-9d82-7543cefe79f3'},
>> {'hostname': 'ovirt02.localdomain.local',
>> 'status': 'CONNECTED',
>> 'uuid': 'b89311fe-257f-4e44-8e15-9bff6245d689'}],
>> 'status': {'code': 0, 'message': 'Done'}}
>> Done
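The gluster side looks fine. For comparison, you can get the same
picture directly from gluster on any of the hosts:

    # Should list the two other hosts as 'Peer in Cluster (Connected)':
    gluster peer status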
>>
>> [root@ovirt01 ~]# vdsClient -s 0 list
>>
>> 87fd6bdb-535d-45b8-81d4-7e3101a6c364
>> Status = Up
>> nicModel = rtl8139,pv
>> statusTime = 4691827920
>> emulatedMachine = pc
>> pid = 18217
>> vmName = HostedEngine
>> devices = [{'device': 'console', 'specParams': {}, 'type': 'console',
>> 'deviceId': '08628a0d-1c2a-43e9-8820-4c02f14d04e9', 'alias': 'console0'},
>> {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon',
>> 'alias': 'balloon0'}, {'alias': 'rng0', 'specParams': {'source': 'random'},
>> 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type':
>> 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio', 'type':
>> 'rng'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type':
>> 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000',
>> 'type': 'pci', 'function': '0x0'}}, {'device': 'vga', 'alias': 'video0',
>> 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain':
>> '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc',
>> 'specParams': {'spiceSecureChannels':
>> 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
>> 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv',
>> 'macAddr': '00:16:3e:0a:e7:ba', 'linkActive': True, 'network': 'ovirtmgmt',
>> 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {},
>> 'deviceId': '79a745a0-e691-4a3d-8d6b-c94306db9113', 'address': {'slot':
>> '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
>> '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index':
>> '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {},
>> 'readonly': 'True', 'deviceId': '6be25e51-0944-4fc0-93fe-4ecabe32ac6b',
>> 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0',
>> 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type':
>> 'disk'}, {'poolID': '00000000-0000-0000-0000-000000000000', 'reqsize': '0',
>> 'index': '0', 'iface': 'virtio', 'apparentsize': '10737418240', 'alias':
>> 'virtio-disk0', 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
>> 'readonly': 'False', 'shared': 'exclusive', 'truesize': '3395743744',
>> 'type': 'disk', 'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
>> 'volumeInfo': {'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
>> 'volType': 'path', 'leaseOffset': 0, 'volumeID':
>> '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'leasePath':
>> '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease',
>> 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path':
>> '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'},
>> 'format': 'raw', 'deviceId': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
>> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
>> 'pci', 'function': '0x0'}, 'device': 'disk', 'path':
>> '/var/run/vdsm/storage/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6',
>> 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder':
>> '1', 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'specParams': {},
>> 'volumeChain': [{'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
>> 'volType': 'path', 'leaseOffset': 0, 'volumeID':
>> '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'leasePath':
>> '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease',
>> 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path':
>> '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'}]},
>> {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot':
>> '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
>> '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address':
>> {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
>> 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0',
>> 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain':
>> '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias':
>> 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0',
>> 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias':
>> 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0',
>> 'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias':
>> 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0',
>> 'type': 'virtio-serial', 'port': '3'}}]
>> guestDiskMapping = {'cf8b8f4e-fa01-457e-8': {'name': '/dev/vda'},
>> 'QEMU_DVD-ROM_QM00003': {'name': '/dev/sr0'}}
>> vmType = kvm
>> clientIp =
>> displaySecurePort = -1
>> memSize = 6144
>> displayPort = 5900
>> cpuType = Broadwell
>> spiceSecureChannels =
>> smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
>> smp = 1
>> displayIp = 0
>> display = vnc
>> pauseCode = NOERR
>> maxVCpus = 2
>> [root@ovirt01 ~]#
>>
>> Thanks
>> Gianluca
>
>
>
> Other info:
>
> 1) time is in sync across the 3 hosts
>
> 2) hosted-engine --vm-status gives this:
>
> [root@ovirt01 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : ovirt01.localdomain.local
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : a12c7427
> Host timestamp : 397487
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=397487 (Sun Nov 20 14:59:38 2016)
> host-id=1
> score=3400
> maintenance=False
> state=EngineUp
> stopped=False
> [root@ovirt01 ~]#
>
>
> [root@ovirt02 ~]# hosted-engine --vm-status
> You must run deploy first
> [root@ovirt02 ~]#
>
> [root@ovirt03 ~]# hosted-engine --vm-status
> You must run deploy first
> [root@ovirt03 ~]#
>
>
> So it seems there is something wrong with the setup.
> I thought that deploying from the webadmin GUI would have put the hosts
> into the same environment...
There is now an option for this in the GUI; did you select it when adding the hosts? See also:
https://bugzilla.redhat.com/show_bug.cgi?id=1167262
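If the option was not selected when ovirt02 and ovirt03 were added, that
would explain everything: "You must run deploy first" means they were
added as plain virtualization hosts, so the hosted-engine HA metadata
does not exist on them and the engine VM cannot be migrated there. You
should be able to fix it without removing the hosts: put each one into
maintenance in the webadmin GUI and reinstall it with the hosted-engine
"Deploy" action selected. Roughly the same thing through the REST API,
as a sketch only (engine FQDN, host id and credentials are placeholders,
and the exact request body should be checked against the API
documentation for your version):

    # Placeholders throughout; reinstalls the host and deploys the
    # hosted-engine HA services on it:
    curl -k -u 'admin@internal:password' \
      -H 'Content-Type: application/xml' \
      -d '<action><deploy_hosted_engine>true</deploy_hosted_engine></action>' \
      'https://<engine-fqdn>/ovirt-engine/api/hosts/<host-id>/install'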
--
Didi