ovirt 4.2.7 nested not importing SHE domain

Hello, I'm configuring a nested self-hosted engine environment with 4.2.7 and CentOS 7.5. Domain type is NFS. I deployed with

hosted-engine --deploy --noansible

All apparently went well, but after creating the master storage domain I see that the hosted-engine storage domain is not automatically imported. At the moment I have only one host.

ovirt-ha-agent status gives every 10 seconds:

Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs

In engine.log I see every 15 seconds a dumpxml output and the message:

2018-11-09 00:31:52,822+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null architecture type, replacing with x86_64, VM [HostedEngine]

see the full engine.log excerpt quoted below.

Any hint?

Thanks
Gianluca
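For anyone chasing the same symptom, a minimal sketch of how to watch both sides of this, assuming stock log locations (the grep runs on the engine VM, the rest on the host):

# follow the HA agent's complaints on the host (the OVF_STORE error repeats every ~10s)
journalctl -u ovirt-ha-agent -f

# overall hosted-engine HA state as the agent sees it
hosted-engine --vm-status

# on the engine VM, look for the recurring warning in engine.log
grep 'null architecture type' /var/log/ovirt-engine/engine.log | tail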

On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm configuring a nested self-hosted engine environment with 4.2.7 and CentOS 7.5. Domain type is NFS. I deployed with
hosted-engine --deploy --noansible
All apparently went well, but after creating the master storage domain I see that the hosted-engine storage domain is not automatically imported. At the moment I have only one host.
ovirt-ha-agent status gives every 10 seconds:
Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs
In engine.log I see every 15 seconds a dumpxml output and the message:
2018-11-09 00:31:52,822+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null architecture type, replacing with x86_64, VM [HostedEngine]
see full below.
Any hint?
Hi Gianluca, unfortunately it's a known regression; it's currently tracked here: https://bugzilla.redhat.com/1639604
In the meantime I'd suggest using the new ansible flow, which is not affected by this issue, or deploying with an engine appliance shipped before 4.2.5, completing the upgrade on the engine side only when everything is there as expected.
Thanks Gianluca
2018-11-09 00:31:52,714+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [] VM '21c5fe9f-cd46-49fd-a6f3-009b4d450894' was discovered as 'Up' on VDS '4de40432-c1f7-4f20-b231-347095015fbd'(ovirtdemo01.localdomain.local)
2018-11-09 00:31:52,764+01 INFO [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] Running command: AddUnmanagedVmsCommand internal: true.
2018-11-09 00:31:52,766+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] START, DumpXmlsVDSCommand(HostName = ovirtdemo01.localdomain.local, Params:{hostId='4de40432-c1f7-4f20-b231-347095015fbd', vmIds='[21c5fe9f-cd46-49fd-a6f3-009b4d450894]'}), log id: 5d5a0a63
2018-11-09 00:31:52,775+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] FINISH, DumpXmlsVDSCommand, return: {21c5fe9f-cd46-49fd-a6f3-009b4d450894=<domain type='kvm' id='2'> <name>HostedEngine</name> <uuid>21c5fe9f-cd46-49fd-a6f3-009b4d450894</uuid> <metadata xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ovirt-tune:qos/> <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot> <ovirt-vm:memGuaranteedSize type="int">0</ovirt-vm:memGuaranteedSize> <ovirt-vm:startTime type="float">1541719799.6</ovirt-vm:startTime> <ovirt-vm:device devtype="console" name="console0">
<ovirt-vm:deviceId>2e8944d3-7ac4-4597-8883-c0b2937fb23b</ovirt-vm:deviceId> <ovirt-vm:specParams/> <ovirt-vm:vm_custom/> </ovirt-vm:device> <ovirt-vm:device mac_address="00:16:3e:35:d9:2c">
<ovirt-vm:deviceId>0f1d0ce3-8843-4418-b882-0d84ca481717</ovirt-vm:deviceId> <ovirt-vm:network>ovirtmgmt</ovirt-vm:network> <ovirt-vm:specParams/> <ovirt-vm:vm_custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="hdc">
<ovirt-vm:deviceId>6d167005-b547-4190-938f-ce1b82eae7af</ovirt-vm:deviceId> <ovirt-vm:shared>false</ovirt-vm:shared> <ovirt-vm:specParams/> <ovirt-vm:vm_custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="vda">
<ovirt-vm:deviceId>8eb98007-1d9a-4689-bbab-b3c7060efef8</ovirt-vm:deviceId>
<ovirt-vm:domainID>fbcfb922-0103-43fb-a2b6-2bf0c9e356ea</ovirt-vm:domainID> <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>
<ovirt-vm:imageID>8eb98007-1d9a-4689-bbab-b3c7060efef8</ovirt-vm:imageID>
<ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID> <ovirt-vm:shared>exclusive</ovirt-vm:shared>
<ovirt-vm:volumeID>64bdb7cd-60a1-4420-b3a6-607b20e2cd5a</ovirt-vm:volumeID> <ovirt-vm:specParams/> <ovirt-vm:vm_custom/> <ovirt-vm:volumeChain> <ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>fbcfb922-0103-43fb-a2b6-2bf0c9e356ea</ovirt-vm:domainID>
<ovirt-vm:imageID>8eb98007-1d9a-4689-bbab-b3c7060efef8</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/rhev/data-center/mnt/ovirtdemo01.localdomain.local:_SHE__DOMAIN/fbcfb922-0103-43fb-a2b6-2bf0c9e356ea/images/8eb98007-1d9a-4689-bbab-b3c7060efef8/64bdb7cd-60a1-4420-b3a6-607b20e2cd5a.lease</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/ovirtdemo01.localdomain.local:_SHE__DOMAIN/fbcfb922-0103-43fb-a2b6-2bf0c9e356ea/images/8eb98007-1d9a-4689-bbab-b3c7060efef8/64bdb7cd-60a1-4420-b3a6-607b20e2cd5a</ovirt-vm:path>
<ovirt-vm:volumeID>64bdb7cd-60a1-4420-b3a6-607b20e2cd5a</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> </ovirt-vm:volumeChain> </ovirt-vm:device> </ovirt-vm:vm> </metadata> <memory unit='KiB'>6270976</memory> <currentMemory unit='KiB'>6270976</currentMemory> <vcpu placement='static' current='1'>2</vcpu> <cputune> <shares>1020</shares> </cputune> <resource> <partition>/machine</partition> </resource> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>oVirt Node</entry> <entry name='version'>7-5.1804.5.el7.centos</entry> <entry name='serial'>2820BD92-2B2B-42C5-912B-76FB65E93FBF</entry> <entry name='uuid'>21c5fe9f-cd46-49fd-a6f3-009b4d450894</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='pc-i440fx-rhel7.5.0'>hvm</type> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact' check='full'> <model fallback='forbid'>Skylake-Client</model> <feature policy='require' name='hypervisor'/> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>destroy</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver error_policy='stop'/> <source startupPolicy='optional'/> <target dev='hdc' bus='ide'/> <readonly/> <alias name='ide0-1-0'/> <address type='drive' controller='0' bus='1' target='0' unit='0'/> </disk> <disk type='file' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/> <source file='/var/run/vdsm/storage/fbcfb922-0103-43fb-a2b6-2bf0c9e356ea/8eb98007-1d9a-4689-bbab-b3c7060efef8/64bdb7cd-60a1-4420-b3a6-607b20e2cd5a'/> <backingStore/> <target dev='vda' bus='virtio'/> <serial>8eb98007-1d9a-4689-bbab-b3c7060efef8</serial> <boot order='1'/> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </disk> <controller type='scsi' index='0' model='virtio-scsi'> <alias name='scsi0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <controller type='usb' index='0' model='piix3-uhci'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller> <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <lease> <lockspace>fbcfb922-0103-43fb-a2b6-2bf0c9e356ea</lockspace> <key>64bdb7cd-60a1-4420-b3a6-607b20e2cd5a</key> <target path='/rhev/data-center/mnt/ovirtdemo01.localdomain.local:_SHE__DOMAIN/fbcfb922-0103-43fb-a2b6-2bf0c9e356ea/images/8eb98007-1d9a-4689-bbab-b3c7060efef8/64bdb7cd-60a1-4420-b3a6-607b20e2cd5a.lease'/> </lease> <interface type='bridge'> <mac address='00:16:3e:35:d9:2c'/> <source bridge='ovirtmgmt'/> <target dev='vnet0'/> <model type='virtio'/> <link state='up'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='virtio' port='0'/> <alias name='console0'/> </console> <channel type='unix'> 
<source mode='bind' path='/var/lib/libvirt/qemu/channels/21c5fe9f-cd46-49fd-a6f3-009b4d450894.com.redhat.rhevm.vdsm'/> <target type='virtio' name='com.redhat.rhevm.vdsm' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/21c5fe9f-cd46-49fd-a6f3-009b4d450894.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/21c5fe9f-cd46-49fd-a6f3-009b4d450894.org.ovirt.hosted-engine-setup.0'/> <target type='virtio' name='org.ovirt.hosted-engine-setup.0' state='disconnected'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <input type='mouse' bus='ps2'> <alias name='input0'/> </input> <input type='keyboard' bus='ps2'> <alias name='input1'/> </input> <graphics type='vnc' port='5900' autoport='yes' listen='0' passwdValidTo='1970-01-01T00:00:01'> <listen type='address' address='0'/> </graphics> <video> <model type='vga' vram='32768' heads='1' primary='yes'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='none'> <alias name='balloon0'/> </memballoon> <rng model='virtio'> <backend model='random'>/dev/urandom</backend> <alias name='rng0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </rng> </devices> <seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c500,c542</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c500,c542</imagelabel> </seclabel> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+107:+107</label> <imagelabel>+107:+107</imagelabel> </seclabel> </domain> }, log id: 5d5a0a63 2018-11-09 00:31:52,822+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null architecture type, replacing with x86_64, VM [HostedEngine]

On Fri, Nov 9, 2018 at 11:28 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm configuring a nested self-hosted engine environment with 4.2.7 and CentOS 7.5. Domain type is NFS. I deployed with
hosted-engine --deploy --noansible
All apparently went well, but after creating the master storage domain I see that the hosted-engine storage domain is not automatically imported. At the moment I have only one host.
ovirt-ha-agent status gives every 10 seconds:
Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs
In engine.log I see every 15 seconds a dumpxml output and the message:
2018-11-09 00:31:52,822+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null architecture type, replacing with x86_64, VM [HostedEngine]
see full below.
Any hint?
Hi Gianluca, unfortunately it's a known regression; it's currently tracked here: https://bugzilla.redhat.com/1639604
In the meantime I'd suggest using the new ansible flow, which is not affected by this issue, or deploying with an engine appliance shipped before 4.2.5, completing the upgrade on the engine side only when everything is there as expected.
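For reference, the two invocations discussed in this thread (both accepted by the hosted-engine CLI on 4.2.x):

# "vintage" flow, affected by the regression tracked in BZ 1639604
hosted-engine --deploy --noansible

# ansible-based flow (the 4.2 default), not affected by this issue
hosted-engine --deploy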
Thanks Simone, I scratched and reinstalled using the 4.2 appliance and the default option (with ansible), executing the command:

hosted-engine --deploy

It completed without errors, and the hosted_engine storage domain and the HostedEngine VM inside it were already visible, without the former dependency on creating a data domain first.

Gianluca

Hi,
It completed without errors, and the hosted_engine storage domain and the HostedEngine VM inside it were already visible, without the former dependency on creating a data domain first.
Glad it works for you. This is indeed one of the few small improvements in the new deployment procedure :)

We do not recommend using the old procedure anymore unless there is something special that does not work there. In other words, try ansible first from now on.

Best regards

--
Martin Sivak
HE ex-maintainer :)

On Fri, Nov 9, 2018 at 1:56 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Fri, Nov 9, 2018 at 11:28 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm configuring a nested self-hosted engine environment with 4.2.7 and CentOS 7.5. Domain type is NFS. I deployed with
hosted-engine --deploy --noansible
All apparently went well, but after creating the master storage domain I see that the hosted-engine storage domain is not automatically imported. At the moment I have only one host.
ovirt-ha-agent status gives every 10 seconds:
Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs
In engine.log I see every 15 seconds a dumpxml output and the message:
2018-11-09 00:31:52,822+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null architecture type, replacing with x86_64, VM [HostedEngine]
see full below.
Any hint?
Hi Gianluca, unfortunately it's a known regression; it's currently tracked here: https://bugzilla.redhat.com/1639604
In the meantime I'd suggest using the new ansible flow, which is not affected by this issue, or deploying with an engine appliance shipped before 4.2.5, completing the upgrade on the engine side only when everything is there as expected.
Thanks Simone, I scratched and reinstalled using the 4.2 appliance and the default option (with ansible), executing the command:
hosted-engine --deploy
Gianluca

On Fri, Nov 9, 2018 at 2:19 PM Martin Sivak <msivak@redhat.com> wrote:
Hi,
It completed without errors and the hosted_engine storage domain and the HostedEngine inside it were already visible, without the former dependency to create a data domain
Glad it works for you.
This is indeed one of the few small improvements in the new deployment procedure :) We do not recommend using the old procedure anymore unless there is something special that does not work there. In other words, try ansible first from now on.
Exactly: the "vintage" procedure is deprecated and we are also going to remove it in 4.3. Please note that now, in addition to the interactive hosted-engine-setup (from the CLI and from the Cockpit GUI), we also have a pure Ansible role that can be executed by itself or combined with other Ansible roles for automated deployments, or to create more complex and richer environments with a single Ansible playbook. The project is here: https://github.com/oVirt/ovirt-ansible-hosted-engine-setup while its artifacts are distributed as RPMs or via Ansible Galaxy.
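As a rough sketch of what driving that role could look like (the Galaxy role name is assumed from the project above, and the playbook below is a bare skeleton; the role's deployment variables are documented in its README):

# install the role from Ansible Galaxy (it is also distributed as an rpm)
ansible-galaxy install oVirt.hosted-engine-setup

# write a minimal playbook applying the role to the target host
cat > he_deploy.yml <<'EOF'
- hosts: all
  roles:
    - role: oVirt.hosted-engine-setup
EOF

# run it against the host that will run the engine VM
# (the trailing comma makes a one-host inline inventory)
ansible-playbook -i "ovirtdemo01.localdomain.local," he_deploy.yml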
Best regards
-- Martin Sivak HE ex-maintainer :)
On Fri, Nov 9, 2018 at 1:56 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Fri, Nov 9, 2018 at 11:28 AM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Fri, Nov 9, 2018 at 12:45 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm configuring a nested self-hosted engine environment with 4.2.7 and CentOS 7.5. Domain type is NFS. I deployed with
hosted-engine --deploy --noansible
All apparently went well, but after creating the master storage domain I see that the hosted-engine storage domain is not automatically imported. At the moment I have only one host.
ovirt-ha-agent status gives every 10 seconds:
Nov 09 00:36:30 ovirtdemo01.localdomain.local ovirt-ha-agent[18407]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs
In engine.log I see every 15 seconds a dumpxml output and the message:
2018-11-09 00:31:52,822+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7fcce3cb] null architecture type, replacing with x86_64, VM [HostedEngine]
see full below.
Any hint?
Hi Gianluca, unfortunately it's a known regression; it's currently tracked here: https://bugzilla.redhat.com/1639604
In the meantime I'd suggest using the new ansible flow, which is not affected by this issue, or deploying with an engine appliance shipped before 4.2.5, completing the upgrade on the engine side only when everything is there as expected.
Thanks Simone, I scratched and reinstalled using the 4.2 appliance and the default option (with ansible), executing the command:
hosted-engine --deploy
Gianluca
participants (3)
- Gianluca Cecchi
- Martin Sivak
- Simone Tiraboschi